[jira] [Created] (PARQUET-2263) Upgrade maven-shade-plugin to 3.4.1

2023-04-04 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-2263:


 Summary: Upgrade maven-shade-plugin to 3.4.1
 Key: PARQUET-2263
 URL: https://issues.apache.org/jira/browse/PARQUET-2263
 Project: Parquet
  Issue Type: Improvement
Affects Versions: 1.13.0
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PARQUET-1355) Improvement Binary write performance

2022-10-27 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang resolved PARQUET-1355.
--
Resolution: Won't Fix

> Improvement Binary write performance
> 
>
> Key: PARQUET-1355
> URL: https://issues.apache.org/jira/browse/PARQUET-1355
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>
> *Benchmark code*:
> {code:scala}
> test("Parquet write benchmark") {
>   val count = 100 * 1024 * 1024
>   val numIters = 5
>   withTempPath { path =>
>     val benchmark =
>       new Benchmark(s"Parquet write benchmark ${spark.sparkContext.version}", 5)
>     Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)").foreach { dt =>
>       benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
>         spark.range(count).selectExpr(s"cast(id as $dt) as id")
>           .write.mode("overwrite").parquet(path.getAbsolutePath)
>       }
>     }
>     benchmark.run()
>   }
> }
> {code}
> *Result*:
> {noformat}
> -- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.3.3-SNAPSHOT:   Best/Avg Time(ms)   Rate(M/s)    Per Row(ns)   Relative
> --------------------------------------------------------------------------------------------------
> long type                                     10963 / 11344         0.0   2192675973.8       1.0X
> string type                                   28423 / 29437         0.0   5684553922.2       0.4X
> decimal(18, 0) type                           11558 / 11696         0.0   2311587203.6       0.9X
> decimal(38, 18) type                          43858 / 44432         0.0   8771537663.4       0.2X
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:   Best/Avg Time(ms)   Rate(M/s)    Per Row(ns)   Relative
> --------------------------------------------------------------------------------------------------
> long type                                     11633 / 12070         0.0   2326572295.8       1.0X
> string type                                   31374 / 32178         0.0   6274760187.4       0.4X
> decimal(18, 0) type                           13019 / 13294         0.0   2603841925.4       0.9X
> decimal(38, 18) type                          50719 / 50983         0.0  10143775007.6       0.2X
> {noformat}
> The main performance cost is 
> [toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83].
> If we avoid {{toByteBuffer}} when comparing binary values, the result is:
> {noformat}
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:   Best/Avg Time(ms)   Rate(M/s)    Per Row(ns)   Relative
> --------------------------------------------------------------------------------------------------
> long type                                     11171 / 11508         0.0   2234189382.0       1.0X
> string type                                   30072 / 30290         0.0   6014346455.4       0.4X
> decimal(18, 0) type                           12150 / 12239         0.0   2430052708.8       0.9X
> decimal(38, 18) type                          44974 / 45423         0.0   8994773738.8       0.2X
> {noformat}
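
For illustration only (a sketch of the idea, not the actual parquet-mr change): wrapping both sides of every comparison in fresh ByteBuffers via {{toByteBuffer}} allocates on each call, while comparing the backing bytes directly avoids those allocations. The Binary values below are hypothetical.
{code:scala}
import org.apache.parquet.io.api.Binary

// Hypothetical values used only for this sketch.
val a = Binary.fromString("foo")
val b = Binary.fromString("bar")

// Allocation-heavy path: every comparison materializes two ByteBuffers.
val viaBuffers = a.toByteBuffer.compareTo(b.toByteBuffer)

// Allocation-avoiding idea: compare bytes directly (unsigned, lexicographic).
// Note that getBytes() may itself copy for some Binary implementations.
def compareUnsigned(x: Array[Byte], y: Array[Byte]): Int = {
  var i = 0
  val n = math.min(x.length, y.length)
  while (i < n) {
    val c = (x(i) & 0xff) - (y(i) & 0xff)
    if (c != 0) return c
    i += 1
  }
  x.length - y.length
}
val direct = compareUnsigned(a.getBytes, b.getBytes)
{code}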



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PARQUET-2192) Add Java 17 build test to GitHub action

2022-09-17 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-2192:


 Summary: Add Java 17 build test to GitHub action
 Key: PARQUET-2192
 URL: https://issues.apache.org/jira/browse/PARQUET-2192
 Project: Parquet
  Issue Type: Test
  Components: parquet-testing
Affects Versions: 1.13.0
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PARQUET-2191) Upgrade Scala to 2.12.17

2022-09-15 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-2191:


 Summary: Upgrade Scala to 2.12.17
 Key: PARQUET-2191
 URL: https://issues.apache.org/jira/browse/PARQUET-2191
 Project: Parquet
  Issue Type: Improvement
Affects Versions: 1.13.0
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1203) Corrupted parquet file from Spark

2021-12-03 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17453288#comment-17453288
 ] 

Yuming Wang commented on PARQUET-1203:
--

It may be caused by a hardware issue. You can add this line:
{code:scala}
"HostName" -> java.net.InetAddress.getLocalHost.getHostName
{code}
to 
https://github.com/apache/spark/blob/v3.2.0/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetWriteSupport.scala#L115-L117
 to find out which machine generates the corrupted files.
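
To make that concrete, a rough sketch of what the extra key/value metadata could look like with the suggested entry added (the other keys and the schema string here are only illustrative of what Spark stores, not the exact code at the linked lines):
{code:scala}
import scala.collection.JavaConverters._

// Placeholder standing in for the DataFrame schema JSON Spark normally writes.
val schemaString = """{"type":"struct","fields":[]}"""

// Extra key/value metadata handed to the Parquet writer; the "HostName" entry
// is the suggested addition so a corrupt file can be traced to its writer host.
val extraMetadata = Map(
  "org.apache.spark.version" -> org.apache.spark.SPARK_VERSION,
  "org.apache.spark.sql.parquet.row.metadata" -> schemaString,
  "HostName" -> java.net.InetAddress.getLocalHost.getHostName
).asJava
{code}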

> Corrupted parquet file from Spark
> -
>
> Key: PARQUET-1203
> URL: https://issues.apache.org/jira/browse/PARQUET-1203
> Project: Parquet
>  Issue Type: Bug
> Environment: Spark 2.2.1
>Reporter: Dong Jiang
>Assignee: Ryan Blue
>Priority: Major
>
> Hi, 
> We are running on Spark 2.2.1, generating parquet files on S3, like the 
> following 
> pseudo code 
> df.write.parquet(...) 
> We have recently noticed parquet file corruptions when reading the parquet 
> files in Spark or Presto. I downloaded a corrupted file from S3 and got the 
> following errors in Spark: 
> Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read 
> value at 40870 in block 0 in file 
> file:/Users/djiang/part-00122-80f4886a-75ce-42fa-b78f-4af35426f434.c000.snappy.parquet
>  
> Caused by: org.apache.parquet.io.ParquetDecodingException: could not read 
> page Page [bytes.size=1048594, valueCount=43663, uncompressedSize=1048594] 
> in col [incoming_aliases_array, list, element, key_value, value] BINARY 
> It appears only one column in one of the rows in the file is corrupt; the 
> file has 111041 rows. 
> My questions are: 
> 1) How can I identify the corrupted row? 
> 2) What could cause the corruption? A Spark issue or a Parquet issue? 
> Any help is greatly appreciated. 
> Thanks, 
> Dong 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PARQUET-1805) Refactor the configuration for bloom filters

2021-02-03 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17277848#comment-17277848
 ] 

Yuming Wang commented on PARQUET-1805:
--

Thank you [~gszadovszky]. No issue for now.

> Refactor the configuration for bloom filters
> 
>
> Key: PARQUET-1805
> URL: https://issues.apache.org/jira/browse/PARQUET-1805
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Gabor Szadovszky
>Assignee: Gabor Szadovszky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Refactor the hadoop configuration for bloom filters according to PARQUET-1784.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-41) Add bloom filters to parquet statistics

2021-02-01 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-41?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17276854#comment-17276854
 ] 

Yuming Wang commented on PARQUET-41:


[~nchammas] You can check the related configuration parameters here: 
[https://github.com/apache/parquet-mr/tree/master/parquet-hadoop|https://github.com/apache/parquet-mr/tree/master/parquet-hadoop]
This is an example:
{code:scala}
val numRows = 1024 * 1024 * 15
val df = spark.range(numRows).selectExpr(
  "id",
  "cast(id as string) as s",
  "cast(id as timestamp) as ts",
  "cast(cast(id as timestamp) as date) as td",
  "cast(id as decimal) as dec")
val benchmark = new org.apache.spark.benchmark.Benchmark(
  "Benchmark bloom filter write",
  numRows,
  minNumIters = 5)

benchmark.addCase("default") { _ =>
  withSQLConf() {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for ts column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"false",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#ts" 
-> "true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for ts and dec column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"false",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#ts" 
-> "true",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#dec" 
-> "true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for all column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}
benchmark.run()
{code}

> Add bloom filters to parquet statistics
> ---
>
> Key: PARQUET-41
> URL: https://issues.apache.org/jira/browse/PARQUET-41
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-format, parquet-mr
>Reporter: Alex Levenson
>Assignee: Junjie Chen
>Priority: Major
>  Labels: filter2, pull-request-available
> Fix For: format-2.7.0, 1.12.0
>
>
> For row groups with no dictionary, we could still produce a bloom filter. 
> This could be very useful in filtering entire row groups.
> Pull request:
> https://github.com/apache/parquet-mr/pull/215



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1969) Test by GithubAction

2021-02-01 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17276833#comment-17276833
 ] 

Yuming Wang commented on PARQUET-1969:
--

Travis has been broken for several days. I have tested with GitHub Actions: 
https://github.com/wangyum/parquet-mr/actions/runs/529590762

> Test by GithubAction
> 
>
> Key: PARQUET-1969
> URL: https://issues.apache.org/jira/browse/PARQUET-1969
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.12.0
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PARQUET-1969) Test by GithubAction

2021-02-01 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1969:


 Summary: Test by GithubAction
 Key: PARQUET-1969
 URL: https://issues.apache.org/jira/browse/PARQUET-1969
 Project: Parquet
  Issue Type: Improvement
  Components: parquet-mr
Affects Versions: 1.12.0
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PARQUET-1968) FilterApi support In predicate

2021-02-01 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1968:


 Summary: FilterApi support In predicate
 Key: PARQUET-1968
 URL: https://issues.apache.org/jira/browse/PARQUET-1968
 Project: Parquet
  Issue Type: Improvement
  Components: parquet-mr
Affects Versions: 1.12.0
Reporter: Yuming Wang


FilterApi should support a native In predicate.

Spark:

https://github.com/apache/spark/blob/d6a68e0b67ff7de58073c176dd097070e88ac831/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L600-L605

Impala:

https://issues.apache.org/jira/browse/IMPALA-3654
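
For context, a minimal sketch (using the current API, not a proposed one) of how an {{IN (1, 2, 3)}} filter has to be expressed today, i.e. as OR-chained equality predicates; a native In predicate would let engines push the whole value set down as a single node:
{code:scala}
import org.apache.parquet.filter2.predicate.FilterApi

// id IN (1, 2, 3) expressed with the existing FilterApi building blocks.
val col = FilterApi.intColumn("id")
val p1 = FilterApi.eq(col, Integer.valueOf(1))
val p2 = FilterApi.eq(col, Integer.valueOf(2))
val p3 = FilterApi.eq(col, Integer.valueOf(3))
val inAsOrChain = FilterApi.or(FilterApi.or(p1, p2), p3)
{code}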



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1805) Refactor the configuration for bloom filters

2021-02-01 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17276312#comment-17276312
 ] 

Yuming Wang commented on PARQUET-1805:
--

Thank you [~gszadovszky] [~junjie]. This is what I want:
{code:sql}
set parquet.bloom.filter.enabled=false;
set parquet.bloom.filter.enabled#ts=true;
set parquet.bloom.filter.enabled#dec=true;
{code}
Benchmark code and result:
{code:scala}
val numRows = 1024 * 1024 * 15
val df = spark.range(numRows).selectExpr(
  "id",
  "cast(id as string) as s",
  "cast(id as timestamp) as ts",
  "cast(cast(id as timestamp) as date) as td",
  "cast(id as decimal) as dec")
val benchmark = new org.apache.spark.benchmark.Benchmark(
  "Benchmark bloom filter write",
  numRows,
  minNumIters = 5)

benchmark.addCase("default") { _ =>
  withSQLConf() {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for ts column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"false",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#ts" 
-> "true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for ts and dec column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"false",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#ts" 
-> "true",
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED + "#dec" 
-> "true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}

benchmark.addCase("Build bloom filter for all column") { _ =>
  withSQLConf(
org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> 
"true") {
df.write.mode("overwrite").parquet("/tmp/spark/parquet")
  }
}
benchmark.run()
{code}
{noformat}
Java HotSpot(TM) 64-Bit Server VM 1.8.0_251-b08 on Mac OS X 10.15.7
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Benchmark bloom filter write:               Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------------------------------------------------
default                                              5207           5314          72         3.0         331.1       1.0X
Build bloom filter for ts column                     5808           6065         245         2.7         369.2       0.9X
Build bloom filter for ts and dec column             6685           6776          79         2.4         425.0       0.8X
Build bloom filter for all column                    9077           9889         629         1.7         577.1       0.6X
{noformat}

cc [~dongjoon]

> Refactor the configuration for bloom filters
> 
>
> Key: PARQUET-1805
> URL: https://issues.apache.org/jira/browse/PARQUET-1805
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Gabor Szadovszky
>Assignee: Gabor Szadovszky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Refactor the hadoop configuration for bloom filters according to PARQUET-1784.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1805) Refactor the configuration for bloom filters

2021-01-30 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17275608#comment-17275608
 ] 

Yuming Wang commented on PARQUET-1805:
--

It seems the previous configuration was better; enabling the bloom filter 
seriously affects write performance:
{code:scala}
val numRows = 1024 * 1024 * 15
val df = spark.range(numRows).selectExpr(
  "id",
  "cast(id as string) as s",
  "cast(id as timestamp) as ts",
  "cast(cast(id as timestamp) as date) as td",
  "cast(id as decimal) as dec")
val benchmark = new org.apache.spark.benchmark.Benchmark(
  "Benchmark bloom filter write",
  numRows,
  minNumIters = 5)
Seq(false, true).foreach { pushDownEnabled =>
  val name = s"Write parquet ${if (pushDownEnabled) "(bloom filter)" else ""}"
  benchmark.addCase(name) { _ =>
    withSQLConf(
      org.apache.parquet.hadoop.ParquetOutputFormat.BLOOM_FILTER_ENABLED -> s"$pushDownEnabled") {
      df.write.mode("overwrite").parquet("/tmp/spark/parquet")
    }
  }
}
benchmark.run()
{code}

{noformat}
Java HotSpot(TM) 64-Bit Server VM 1.8.0_251-b08 on Mac OS X 10.15.7
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Benchmark bloom filter write:               Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------------------------------------------------
Write parquet                                        5531           6001         503         2.8         351.6       1.0X
Write parquet (bloom filter)                        10529          11633        1113         1.5         669.4       0.5X

{noformat}



> Refactor the configuration for bloom filters
> 
>
> Key: PARQUET-1805
> URL: https://issues.apache.org/jira/browse/PARQUET-1805
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Gabor Szadovszky
>Assignee: Gabor Szadovszky
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Refactor the hadoop configuration for bloom filters according to PARQUET-1784.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PARQUET-1739) Make Spark SQL support Column indexes

2021-01-29 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang resolved PARQUET-1739.
--
Fix Version/s: 1.11.1
   Resolution: Fixed

> Make Spark SQL support Column indexes
> -
>
> Key: PARQUET-1739
> URL: https://issues.apache.org/jira/browse/PARQUET-1739
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 1.11.1
>
>
> Make Spark SQL support Column indexes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (PARQUET-1746) Changed the data order after DataFrame reuse

2021-01-20 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268661#comment-17268661
 ] 

Yuming Wang edited comment on PARQUET-1746 at 1/21/21, 12:54 AM:
-

We can disable parquet.page.write-checksum.enabled to work around this issue:
https://github.com/apache/spark/pull/26804#discussion_r561044576


was (Author: q79969786):
https://github.com/apache/spark/pull/26804#discussion_r561044576

> Changed the data order after DataFrame reuse
> 
>
> Key: PARQUET-1746
> URL: https://issues.apache.org/jira/browse/PARQUET-1746
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1746
> git checkout PARQUET-1746
> build/sbt "sql/test-only *StreamSuite"
> {code}
> output:
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Decoded objects do not match expected objects:
> expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
> actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
> assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long"))
> +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long")
>+- getcolumnbyordinal(0, LongType)
>  
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
>   at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
>   at 

[jira] [Commented] (PARQUET-1746) Changed the data order after DataFrame reuse

2021-01-20 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268957#comment-17268957
 ] 

Yuming Wang commented on PARQUET-1746:
--

We can disable parquet.page.write-checksum.enabled to work around this issue:
https://github.com/apache/spark/pull/26804#discussion_r561044576
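
A rough sketch of applying that workaround from a Spark shell (assuming the Hadoop configuration route; {{spark}} and {{df}} stand for the usual shell session and a DataFrame to write):
{code:scala}
// Turn off Parquet page CRC checksums for writes, then write as usual.
spark.sparkContext.hadoopConfiguration
  .set("parquet.page.write-checksum.enabled", "false")

df.write.mode("overwrite").parquet("/tmp/spark/parquet")
{code}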

> Changed the data order after DataFrame reuse
> 
>
> Key: PARQUET-1746
> URL: https://issues.apache.org/jira/browse/PARQUET-1746
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1746
> git checkout PARQUET-1746
> build/sbt "sql/test-only *StreamSuite"
> {code}
> output:
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Decoded objects do not match expected objects:
> expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
> actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
> assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long"))
> +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long")
>+- getcolumnbyordinal(0, LongType)
>  
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
>   at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$20(StreamSuite.scala:224)
>   at 
> 

[jira] [Commented] (PARQUET-1746) Changed the data order after DataFrame reuse

2021-01-20 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268661#comment-17268661
 ] 

Yuming Wang commented on PARQUET-1746:
--

https://github.com/apache/spark/pull/26804#discussion_r561044576

> Changed the data order after DataFrame reuse
> 
>
> Key: PARQUET-1746
> URL: https://issues.apache.org/jira/browse/PARQUET-1746
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1746
> git checkout PARQUET-1746
> build/sbt "sql/test-only *StreamSuite"
> {code}
> output:
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Decoded objects do not match expected objects:
> expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
> actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
> assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long"))
> +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long")
>+- getcolumnbyordinal(0, LongType)
>  
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
>   at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$20(StreamSuite.scala:224)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 

[jira] [Created] (PARQUET-1964) Add null check for getFilteredRecordCount

2021-01-18 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1964:


 Summary: Add null check for getFilteredRecordCount
 Key: PARQUET-1964
 URL: https://issues.apache.org/jira/browse/PARQUET-1964
 Project: Parquet
  Issue Type: Improvement
Reporter: Yuming Wang


How to reproduce this issue:
{code:scala}
val hadoopInputFile = HadoopInputFile.fromPath(
  new Path("/path/to/parquet/000.snappy.parquet"), new Configuration())
val reader = ParquetFileReader.open(hadoopInputFile)
val recordCount = reader.getFilteredRecordCount
reader.close()
{code}

Output:
{noformat}
java.lang.NullPointerException was thrown.
java.lang.NullPointerException
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.calculateRowRanges(ColumnIndexFilter.java:81)
at 
org.apache.parquet.hadoop.ParquetFileReader.getRowRanges(ParquetFileReader.java:961)
at 
org.apache.parquet.hadoop.ParquetFileReader.getFilteredRecordCount(ParquetFileReader.java:766)
{noformat}
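
Until a null check lands in parquet-mr, a caller-side guard is one possible workaround; below is a sketch (the file path is a placeholder) that falls back to the unfiltered record count when the NPE is thrown:
{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile

val inputFile = HadoopInputFile.fromPath(
  new Path("/path/to/parquet/000.snappy.parquet"), new Configuration())
val reader = ParquetFileReader.open(inputFile)
// getFilteredRecordCount throws when no row ranges can be calculated;
// fall back to the plain record count in that case.
val recordCount =
  try reader.getFilteredRecordCount
  catch { case _: NullPointerException => reader.getRecordCount }
reader.close()
{code}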




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PARQUET-1952) Upgrade Avro to 1.10.1

2020-12-14 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1952:


 Summary: Upgrade Avro to 1.10.1
 Key: PARQUET-1952
 URL: https://issues.apache.org/jira/browse/PARQUET-1952
 Project: Parquet
  Issue Type: Improvement
Reporter: Yuming Wang


Avro 1.10.1 release notes:

https://issues.apache.org/jira/issues/?jql=project%20%3D%20AVRO%20AND%20%20fixVersion%20%3D%201.10.1%20and%20status%20%3D%20Resolved%20%20%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1946) Parquet File not readable by Google big query (works with Spark)

2020-12-07 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245061#comment-17245061
 ] 

Yuming Wang commented on PARQUET-1946:
--

Could you try to disable {{parquet.filter.columnindex.enabled}}?

> Parquet File not readable by Google big query (works with Spark)
> 
>
> Key: PARQUET-1946
> URL: https://issues.apache.org/jira/browse/PARQUET-1946
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.11.0
> Environment: [secor|https://github.com/pinterest/secor]
> GCP 
> Big Query google cloud
> Parquet writer 1.11
>  
>  
>Reporter: Richard Grossman
>Priority: Blocker
>
> Hi
> I'm trying to write Avro messages as Parquet on GCS. These Parquet files should be 
> queryable by the BigQuery engine, which now supports Parquet.
> To do this I'm using Secor, a Kafka log persister tool from Pinterest.
> At first I didn't notice any problem: using Spark, the same file can be read 
> without any issue and everything works perfectly.
> Querying with BigQuery now brings up an error like this:
> Error while reading table: , error message: Read less values than expected: 
> Actual: 29333, Expected: 33827. Row group: 0, Column: , File:
> After investigating with parquet-tools, I figured out that the Parquet file 
> contains metadata about the total number of unique values for each column, e.g. from 
> parquet-tools:
> page 0: DLE:BIT_PACKED RLE:BIT_PACKED [more]... CRC:[PAGE CORRUPT] VC:547
> So the VC value indicates that the total number of unique values in the file is 
> 547.
> Now when I run a Spark SQL query like SELECT COUNT(DISTINCT column) FROM ... I get 
> 421, meaning this number in the metadata is incorrect.
> So what is not a problem for Spark to read is a blocking problem for BigQuery, 
> because it relies on these values and finds them incorrect.
> Is there any configuration of the writer that can prevent these errors in the 
> metadata? Or is this normal behavior that shouldn't be a problem?
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1739) Make Spark SQL support Column indexes

2020-04-12 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081827#comment-17081827
 ] 

Yuming Wang commented on PARQUET-1739:
--

[~gszadovszky] I found that in some cases the performance is worse:
|Case|Parquet 1.11 Vectorized(ms)|Parquet 1.11 Vectorized(Pushdown)(ms)|Parquet 1.10 Vectorized(ms)|Parquet 1.10 Vectorized(Pushdown)(ms)|%Improved|
|Select 1 distinct string row (value <=> '100')|6309|1418|7113|528|1.68560606|

> Make Spark SQL support Column indexes
> -
>
> Key: PARQUET-1739
> URL: https://issues.apache.org/jira/browse/PARQUET-1739
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 1.11.1
>
>
> Make Spark SQL support Column indexes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1739) Make Spark SQL support Column indexes

2020-04-12 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081826#comment-17081826
 ] 

Yuming Wang commented on PARQUET-1739:
--

Spark benchmark result:
|Case|Parquet 1.11 Vectorized(ms)|Parquet 1.11 Vectorized(Pushdown)(ms)|Parquet 1.10 Vectorized(ms)|Parquet 1.10 Vectorized(Pushdown)(ms)|%Improved|
|Select 0 string row (value IS NULL)|7001|631|8459|569|0.10896309|
|Select 0 string row ('7864320' < value < '7864320')|8801|744|9596|470|0.58297872|
|Select 1 string row (value = '7864320')|6973|578|8415|456|0.26754386|
|Select 1 string row (value <=> '7864320')|7090|867|9681|663|0.30769231|
|Select 1 string row ('7864320' <= value <= '7864320')|7637|639|8257|442|0.44570136|
|Select all string rows (value IS NOT NULL)|14638|14926|15058|17091|-0.1266749|
|Select 0 int row (value IS NULL)|7233|532|8373|460|0.15652174|
|Select 0 int row (7864320 < value < 7864320)|6474|558|8176|620|-0.1|
|Select 1 int row (value = 7864320)|7284|554|7545|435|0.27356322|
|Select 1 int row (value <=> 7864320)|7109|724|8550|484|0.49586777|
|Select 1 int row (7864320 <= value <= 7864320)|6340|563|7648|440|0.27954545|
|Select 1 int row (7864319 < value < 7864321)|7134|620|7521|435|0.42528736|
|Select 10% int rows (value < 1572864)|7561|1986|8790|1988|-0.001006|
|Select 50% int rows (value < 7864320)|10425|7434|10445|7133|0.04219823|
|Select 90% int rows (value < 14155776)|12130|11745|12959|12574|-0.0659297|
|Select all int rows (value IS NOT NULL)|12662|12961|13640|13794|-0.0603886|
|Select all int rows (value > -1)|12568|12864|13547|13691|-0.0604046|
|Select all int rows (value != -1)|12574|12874|14617|14533|-0.114154|
|Select 0 distinct string row (value IS NULL)|5925|455|7013|371|0.22641509|
|Select 0 distinct string row ('100' < value < '100')|6037|445|7087|391|0.13810742|
|Select 1 distinct string row (value = '100')|6107|603|7169|524|0.15076336|
|Select 1 distinct string row (value <=> '100')|6309|1418|7113|528|1.68560606|
|Select 1 distinct string row ('100' <= value <= '100')|6224|620|7222|549|0.12932605|
|Select all distinct string rows (value IS NOT NULL)|14198|14293|15175|16194|-0.1173892|
|StringStartsWith filter: (value like '10%')|8399|3572|10298|2642|0.35200606|
|StringStartsWith filter: (value like '1000%')|7424|559|7998|441|0.2675737|
|StringStartsWith filter: (value like '786432%')|7554|542|7920|428|0.26635514|
|Select 1 decimal(9, 2) row (value = 7864320)|2684|131|3834|115|0.13913043|
|Select 10% decimal(9, 2) rows (value < 1572864)|4201|2280|5139|2170|0.05069124|
|Select 50% decimal(9, 2) rows (value < 7864320)|8661|8325|9593|10449|-0.203273|
|Select 90% decimal(9, 2) rows (value < 14155776)|10213|9833|11647|11828|-0.1686676|
|Select 1 decimal(18, 2) row (value = 7864320)|3259|150|4631|133|0.12781955|
|Select 10% decimal(18, 2) rows (value < 1572864)|4072|1284|5285|1260|0.01904762|
|Select 50% decimal(18, 2) rows (value < 7864320)|7010|5495|7959|5898|-0.0683282|
|Select 90% decimal(18, 2) rows (value < 14155776)|10037|9957|10845|10535|-0.0548647|
|Select 1 decimal(38, 2) row (value = 7864320)|4970|151|5943|131|0.15267176|
|Select 10% decimal(38, 2) rows (value < 1572864)|5912|1605|7079|1827|-0.1215107|
|Select 50% decimal(38, 2) rows (value < 7864320)|9784|7573|11497|7991|-0.0523088|
|Select 90% decimal(38, 2) rows (value < 14155776)|13935|13341|14702|14183|-0.0593668|
|InSet -> InFilters (values count: 5, distribution: 10)|7193|600|8001|495|0.21212121|
|InSet -> InFilters (values count: 5, distribution: 50)|7002|577|8042|480|0.20208333|
|InSet -> InFilters (values count: 5, distribution: 90)|7003|587|8526|484|0.21280992|
|InSet -> InFilters (values count: 10, distribution: 10)|6984|625|8279|519|0.20423892|
|InSet -> InFilters (values count: 10, distribution: 50)|6949|706|8097|505|0.3980198|
|InSet -> InFilters (values count: 10, distribution: 90)|7336|613|7961|507|0.20907298|
|InSet -> InFilters (values count: 50, distribution: 10)|7369|7475|8052|8244|-0.09328|
|InSet -> InFilters (values count: 50, distribution: 50)|7295|7619|8202|8311|-0.0832631|
|InSet -> InFilters (values count: 50, distribution: 90)|7584|7610|8405|8326|-0.0859957|
|InSet -> InFilters (values count: 100, distribution: 10)|7264|7358|8041|8200|-0.1026829|
|InSet -> InFilters (values count: 100, distribution: 50)|7192|7277|8019|8437|-0.1374896|
|InSet -> InFilters (values count: 100, distribution: 90)|7040|7236|10567|10681|-0.3225353|
|Select 1 tinyint row (value = CAST(63 AS tinyint))|3185|247|4855|235|0.05106383|
|Select 10% tinyint rows (value < CAST(12 AS tinyint))|3823|1120|5091|1209|-0.0736146|
|Select 50% tinyint rows (value < CAST(63 AS tinyint))|6570|5117|9265|6076|-0.1578341|
|Select 90% tinyint rows (value < CAST(114 AS tinyint))|9291|9229|10508|10152|-0.090918|
|Select 1 timestamp stored as INT96 row (value = CAST(7864320 AS timestamp))|4054|4757|6253|4774|-0.003561|
|Select 10% timestamp stored 

[jira] [Commented] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-20 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019522#comment-17019522
 ] 

Yuming Wang commented on PARQUET-1745:
--

cc [~cloud_fan]

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> 

[jira] [Commented] (PARQUET-1746) Changed the data order after DataFrame reuse

2020-01-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013368#comment-17013368
 ] 

Yuming Wang commented on PARQUET-1746:
--

It seems {{1.12.0-SNAPSHOT}} works.

> Changed the data order after DataFrame reuse
> 
>
> Key: PARQUET-1746
> URL: https://issues.apache.org/jira/browse/PARQUET-1746
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1746
> git checkout PARQUET-1746
> build/sbt "sql/test-only *StreamSuite"
> {code}
> output:
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Decoded objects do not match expected objects:
> expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
> actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
> assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long"))
> +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long")
>+- getcolumnbyordinal(0, LongType)
>  
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
>   at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$20(StreamSuite.scala:224)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 

[jira] [Created] (PARQUET-1748) Update current release version in README.md

2020-01-10 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1748:


 Summary: Update current release version in README.md
 Key: PARQUET-1748
 URL: https://issues.apache.org/jira/browse/PARQUET-1748
 Project: Parquet
  Issue Type: Task
  Components: parquet-mr
Affects Versions: 1.11.1
Reporter: Yuming Wang
Assignee: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012674#comment-17012674
 ] 

Yuming Wang edited comment on PARQUET-1745 at 1/10/20 10:36 AM:


It seems the reason is that the filter column is not in {{columnIndexStore}}:
 !FilterByColumnIndex.png! 


was (Author: q79969786):
It seems the reason is filter not in {{columnIndexStore}}:
 !FilterByColumnInndex.png! 

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> 

[jira] [Updated] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1745:
-
Attachment: FilterByColumnIndex.png

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> 

[jira] [Updated] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1745:
-
Attachment: (was: image-2020-01-10-18-35-06-129.png)

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.withTempView(ParquetPartitionDiscoverySuite.scala:53)
> 

[jira] [Updated] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1745:
-
Attachment: (was: image-2020-01-10-18-34-30-039.png)

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.withTempView(ParquetPartitionDiscoverySuite.scala:53)
> 

[jira] [Comment Edited] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012674#comment-17012674
 ] 

Yuming Wang edited comment on PARQUET-1745 at 1/10/20 10:35 AM:


It seems the reason is that the filter column is not in the {{columnIndexStore}}:
 !FilterByColumnIndex.png! 


was (Author: q79969786):
[ParquetPartitionDiscoverySuite.scala#L1200-L1208|https://github.com/apache/spark/blob/36fa1980c24c5c697982b107c8f9714f3eb57f36/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetPartitionDiscoverySuite.scala#L1200-L1208]
 generated the following column indexes:
{noformat}
LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c pi 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet
row group 0:
column index for column pi:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  1 1   


offset index for column pi:
  offset   compressed size   first row index
page-06221 0

LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c ps 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet

row group 0:
column index for column ps:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  bar   bar 


offset index for column ps:
  offset   compressed size   first row index
page-0   15427 0

{noformat}

And filtering by the column index:
 !FilterByColumnIndex.png! 
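
For reference, the same column-index information can be pulled out programmatically with the parquet-mr API instead of parquet-tools. A minimal sketch, assuming parquet-mr 1.11.x on the classpath; the class name and file path are illustrative:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.internal.column.columnindex.ColumnIndex;
import org.apache.parquet.internal.column.columnindex.OffsetIndex;

public class DumpColumnIndex {
  public static void main(String[] args) throws Exception {
    // Illustrative path; point this at the file written by the failing test.
    Path file = new Path("/tmp/pi=1/ps=bar/part-0.snappy.parquet");
    Configuration conf = new Configuration();
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(file, conf))) {
      for (BlockMetaData block : reader.getFooter().getBlocks()) {
        for (ColumnChunkMetaData column : block.getColumns()) {
          // readColumnIndex/readOffsetIndex return null when no index was written.
          ColumnIndex columnIndex = reader.readColumnIndex(column);
          OffsetIndex offsetIndex = reader.readOffsetIndex(column);
          System.out.println(column.getPath() + " column index:\n" + columnIndex);
          System.out.println(column.getPath() + " offset index:\n" + offsetIndex);
        }
      }
    }
  }
}
{code}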


> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png, 
> image-2020-01-10-18-34-30-039.png, image-2020-01-10-18-35-06-129.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> 

[jira] [Updated] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1745:
-
Attachment: (was: FilterByColumnIndex.png)

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetPartitionDiscoverySuite.withTempView(ParquetPartitionDiscoverySuite.scala:53)
> [info]   at 

[jira] [Comment Edited] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012674#comment-17012674
 ] 

Yuming Wang edited comment on PARQUET-1745 at 1/10/20 10:21 AM:


[ParquetPartitionDiscoverySuite.scala#L1200-L1208|https://github.com/apache/spark/blob/36fa1980c24c5c697982b107c8f9714f3eb57f36/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetPartitionDiscoverySuite.scala#L1200-L1208]
 generated the following column indexes:
{noformat}
LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c pi 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet
row group 0:
column index for column pi:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  1 1   


offset index for column pi:
  offset   compressed size   first row index
page-06221 0

LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c ps 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet

row group 0:
column index for column ps:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  bar   bar 


offset index for column ps:
  offset   compressed size   first row index
page-0   15427 0

{noformat}

And filtering by the column index:
 !FilterByColumnIndex.png! 



was (Author: q79969786):
[ParquetPartitionDiscoverySuite.scala#L1200-L1208|https://github.com/apache/spark/blob/36fa1980c24c5c697982b107c8f9714f3eb57f36/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetPartitionDiscoverySuite.scala#L1200-L1208]
 generated the following column indexes:
{noformat}
LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c pi 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet
row group 0:
column index for column pi:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  1 1   


offset index for column pi:
  offset   compressed size   first row index
page-06221 0

LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c ps 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet

row group 0:
column index for column ps:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  bar   bar 


offset index for column ps:
  offset   compressed size   first row index
page-0   15427 0

{noformat}

So filtering by the column index gives an incorrect result:
 !FilterByColumnIndex.png! 


> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   

[jira] [Commented] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012674#comment-17012674
 ] 

Yuming Wang commented on PARQUET-1745:
--

[ParquetPartitionDiscoverySuite.scala#L1200-L1208|https://github.com/apache/spark/blob/36fa1980c24c5c697982b107c8f9714f3eb57f36/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetPartitionDiscoverySuite.scala#L1200-L1208]
 generated the following column indexes:
{noformat}
LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c pi 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet
row group 0:
column index for column pi:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  1 1   


offset index for column pi:
  offset   compressed size   first row index
page-06221 0

LM-SHC-16502798:parquet-tools yumwang$ java -jar 
target/parquet-tools-1.11.0.jar  column-index -c ps 
/private/var/folders/tg/f5mz46090wg7swzgdc69f8q03965_0/T/spark-13ffb7a1-ca33-4db7-9e1c-0d966020d64a/pi=1/ps=bar/part-0-0527485f-70b1-44fa-9f3e-7c92f0ffa20b-c000.snappy.parquet

row group 0:
column index for column ps:
Boudary order: ASCENDING
  null count  min   max 

page-0 0  bar   bar 


offset index for column ps:
  offset   compressed size   first row index
page-0   15427 0

{noformat}

So filtering by the column index gives an incorrect result:
 !FilterByColumnIndex.png! 


> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]

[jira] [Updated] (PARQUET-1745) No result for partition key included in Parquet file

2020-01-10 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1745:
-
Attachment: FilterByColumnIndex.png

> No result for partition key included in Parquet file
> 
>
> Key: PARQUET-1745
> URL: https://issues.apache.org/jira/browse/PARQUET-1745
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: FilterByColumnIndex.png
>
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1745
> git checkout PARQUET-1745
> build/sbt "sql/test-only *ParquetV2PartitionDiscoverySuite"
> {code}
> output:
> {noformat}
> [info] - read partitioned table - partition key included in Parquet file *** 
> FAILED *** (1 second, 57 milliseconds)
> [info]   Results do not match for query:
> [info]   Timezone: 
> sun.util.calendar.ZoneInfo[id="America/Los_Angeles",offset=-2880,dstSavings=360,useDaylight=true,transitions=185,lastRule=java.util.SimpleTimeZone[id=America/Los_Angeles,offset=-2880,dstSavings=360,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=720,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=720,endTimeMode=0]]
> [info]   Timezone Env:
> [info]
> [info]   == Parsed Logical Plan ==
> [info]   'Project [*]
> [info]   +- 'Filter ('pi = 1)
> [info]  +- 'UnresolvedRelation [t]
> [info]
> [info]   == Analyzed Logical Plan ==
> [info]   intField: int, stringField: string, pi: int, ps: string
> [info]   Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- Filter (pi#1790 = 1)
> [info]  +- SubqueryAlias `t`
> [info] +- RelationV2[intField#1788, stringField#1789, pi#1790, 
> ps#1791] parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Optimized Logical Plan ==
> [info]   Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]   +- RelationV2[intField#1788, stringField#1789, pi#1790, ps#1791] 
> parquet 
> file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f48be3b74a
> [info]
> [info]   == Physical Plan ==
> [info]   *(1) Project [intField#1788, stringField#1789, pi#1790, ps#1791]
> [info]   +- *(1) Filter (isnotnull(pi#1790) AND (pi#1790 = 1))
> [info]  +- *(1) ColumnarToRow
> [info] +- BatchScan[intField#1788, stringField#1789, pi#1790, 
> ps#1791] ParquetScan Location: 
> InMemoryFileIndex[file:/root/opensource/apache-spark/target/tmp/spark-c7e85130-3e1f-4137-ac7c-32f...,
>  ReadSchema: struct, PushedFilters: 
> [IsNotNull(pi), EqualTo(pi,1)]
> [info]
> [info]   == Results ==
> [info]
> [info]   == Results ==
> [info]   !== Correct Answer - 20 ==   == Spark Answer - 0 ==
> [info]struct<>struct<>
> [info]   ![1,1,1,bar]
> [info]   ![1,1,1,foo]
> [info]   ![10,10,1,bar]
> [info]   ![10,10,1,foo]
> [info]   ![2,2,1,bar]
> [info]   ![2,2,1,foo]
> [info]   ![3,3,1,bar]
> [info]   ![3,3,1,foo]
> [info]   ![4,4,1,bar]
> [info]   ![4,4,1,foo]
> [info]   ![5,5,1,bar]
> [info]   ![5,5,1,foo]
> [info]   ![6,6,1,bar]
> [info]   ![6,6,1,foo]
> [info]   ![7,7,1,bar]
> [info]   ![7,7,1,foo]
> [info]   ![8,8,1,bar]
> [info]   ![8,8,1,foo]
> [info]   ![9,9,1,bar]
> [info]   ![9,9,1,foo] (QueryTest.scala:248)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> [info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> [info]   at 
> org.apache.spark.sql.QueryTest$.newAssertionFailedException(QueryTest.scala:238)
> [info]   at org.scalatest.Assertions.fail(Assertions.scala:1091)
> [info]   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> [info]   at org.apache.spark.sql.QueryTest$.fail(QueryTest.scala:238)
> [info]   at org.apache.spark.sql.QueryTest$.checkAnswer(QueryTest.scala:248)
> [info]   at org.apache.spark.sql.QueryTest.checkAnswer(QueryTest.scala:156)
> [info]   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetV2PartitionDiscoverySuite.$anonfun$new$194(ParquetPartitionDiscoverySuite.scala:1232)
> [info]   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> [info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView(SQLTestUtils.scala:260)
> [info]   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTempView$(SQLTestUtils.scala:258)
> [info]   at 
> 

[jira] [Updated] (PARQUET-1746) Changed the data order after DataFrame reuse

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1746:
-
Summary: Changed the data order after DataFrame reuse  (was: Change the 
order after DataFrame reuse)

> Changed the data order after DataFrame reuse
> 
>
> Key: PARQUET-1746
> URL: https://issues.apache.org/jira/browse/PARQUET-1746
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1746
> git checkout PARQUET-1746
> build/sbt "sql/test-only *StreamSuite"
> {code}
> output:
> {noformat}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
> Decoded objects do not match expected objects:
> expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
> actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
> assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long"))
> +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
> "scala.Long")
>+- getcolumnbyordinal(0, LongType)
>  
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at org.scalatest.Assertions.fail(Assertions.scala:1091)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1087)
>   at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
>   at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
>   at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
>   at 
> org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
>   at 
> org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$20(StreamSuite.scala:224)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 

[jira] [Created] (PARQUET-1746) Change the order after DataFrame reuse

2020-01-08 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1746:


 Summary: Change the order after DataFrame reuse
 Key: PARQUET-1746
 URL: https://issues.apache.org/jira/browse/PARQUET-1746
 Project: Parquet
  Issue Type: Sub-task
  Components: parquet-mr
Affects Versions: 1.11.0
Reporter: Yuming Wang


How to reproduce:

{code:sh}
git clone https://github.com/apache/spark.git && cd spark
git fetch origin pull/26804/head:PARQUET-1746
git checkout PARQUET-1746
build/sbt "sql/test-only *StreamSuite"
{code}

output:
{noformat}
sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: 
Decoded objects do not match expected objects:
expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
actual:   WrappedArray(0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2)
assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: 
"scala.Long"))
+- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: "scala.Long")
   +- getcolumnbyordinal(0, LongType)

 
at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
at 
org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
at org.scalatest.Assertions.fail(Assertions.scala:1091)
at org.scalatest.Assertions.fail$(Assertions.scala:1087)
at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
at org.apache.spark.sql.QueryTest.checkDataset(QueryTest.scala:73)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22(StreamSuite.scala:215)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$22$adapted(StreamSuite.scala:208)
at 
org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
at 
org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
at 
org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
at 
org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
at 
org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
at 
org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21(StreamSuite.scala:208)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$21$adapted(StreamSuite.scala:207)
at 
org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
at 
org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
at 
org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(StreamSuite.scala:51)
at 
org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
at 
org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
at 
org.apache.spark.sql.streaming.StreamSuite.withTempDir(StreamSuite.scala:51)
at 
org.apache.spark.sql.streaming.StreamSuite.assertDF$1(StreamSuite.scala:207)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$25(StreamSuite.scala:226)
at 
org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
at 
org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
at 
org.apache.spark.sql.streaming.StreamSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(StreamSuite.scala:51)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
at 
org.apache.spark.sql.streaming.StreamSuite.withSQLConf(StreamSuite.scala:51)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24(StreamSuite.scala:225)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$24$adapted(StreamSuite.scala:224)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.spark.sql.streaming.StreamSuite.$anonfun$new$20(StreamSuite.scala:224)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
at 

[jira] [Updated] (PARQUET-1744) Some filters throws ArrayIndexOutOfBoundsException

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1744:
-
Summary: Some filters throws ArrayIndexOutOfBoundsException  (was: Some 
filter throws ArrayIndexOutOfBoundsException)

> Some filters throws ArrayIndexOutOfBoundsException
> --
>
> Key: PARQUET-1744
> URL: https://issues.apache.org/jira/browse/PARQUET-1744
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> * Build Spark
> {code:sh}
> git clone https://github.com/apache/spark.git && cd spark
> git fetch origin pull/26804/head:PARQUET-1744
> git checkout PARQUET-1744
> build/sbt  package
> bin/spark-shell
> {code}
> * Prepare data:
> {code:scala}
> spark.sql("create table t1(a int, b int, c int) using parquet")
> spark.sql("insert into t1 values(1,0,0)")
> spark.sql("insert into t1 values(2,0,1)")
> spark.sql("insert into t1 values(3,1,0)")
> spark.sql("insert into t1 values(4,1,1)")
> spark.sql("insert into t1 values(5,null,0)")
> spark.sql("insert into t1 values(6,null,1)")
> spark.sql("insert into t1 values(7,null,null)")
> {code}
> * Run test 1
> {code:scala}
> scala> spark.sql("select a+120 from t1 where b<10 OR c=1").show
> java.lang.reflect.InvocationTargetException
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:155)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:319)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
>   at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>   at 
> org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:486)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
>   at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:339)
>   at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
>   at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.run(Task.scala:127)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds 
> for length 0
>   at 
> org.apache.parquet.internal.column.columnindex.IntColumnIndexBuilder$IntColumnIndex$1.compareValueToMin(IntColumnIndexBuilder.java:74)
>   at 
> org.apache.parquet.internal.column.columnindex.BoundaryOrder$2.lt(BoundaryOrder.java:123)
>   at 
> 

[jira] [Updated] (PARQUET-1744) Some filter throws ArrayIndexOutOfBoundsException

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1744:
-
Description: 
How to reproduce:
* Build Spark
{code:sh}
git clone https://github.com/apache/spark.git && cd spark
git fetch origin pull/26804/head:PARQUET-1744
git checkout PARQUET-1744
build/sbt  package
bin/spark-shell
{code}
* Prepare data:
{code:scala}
spark.sql("create table t1(a int, b int, c int) using parquet")
spark.sql("insert into t1 values(1,0,0)")
spark.sql("insert into t1 values(2,0,1)")
spark.sql("insert into t1 values(3,1,0)")
spark.sql("insert into t1 values(4,1,1)")
spark.sql("insert into t1 values(5,null,0)")
spark.sql("insert into t1 values(6,null,1)")
spark.sql("insert into t1 values(7,null,null)")
{code}
* Run test 1
{code:scala}
scala> spark.sql("select a+120 from t1 where b<10 OR c=1").show
java.lang.reflect.InvocationTargetException
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:155)
at 
org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:319)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
at 
org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:486)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown
 Source)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
at 
org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:339)
at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for 
length 0
at 
org.apache.parquet.internal.column.columnindex.IntColumnIndexBuilder$IntColumnIndex$1.compareValueToMin(IntColumnIndexBuilder.java:74)
at 
org.apache.parquet.internal.column.columnindex.BoundaryOrder$2.lt(BoundaryOrder.java:123)
at 
org.apache.parquet.internal.column.columnindex.ColumnIndexBuilder$ColumnIndexBase.visit(ColumnIndexBuilder.java:262)
at 
org.apache.parquet.internal.column.columnindex.ColumnIndexBuilder$ColumnIndexBase.visit(ColumnIndexBuilder.java:64)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.lambda$visit$2(ColumnIndexFilter.java:131)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.applyPredicate(ColumnIndexFilter.java:176)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:131)
at 
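
For context, the pushed-down filter of the failing query above ("b<10 OR c=1") corresponds to the following parquet-mr predicate; evaluating such a predicate against the column indexes is the code path that throws. A minimal sketch using the standard FilterApi/FilterCompat entry points; the class name is illustrative:
{code:java}
import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;

public class Parquet1744Filter {
  public static void main(String[] args) {
    // "b < 10 OR c = 1" expressed as a parquet-mr predicate. When this is
    // evaluated page by page via ColumnIndexFilter, the report above shows an
    // ArrayIndexOutOfBoundsException coming from IntColumnIndexBuilder.
    FilterPredicate predicate = FilterApi.or(
        FilterApi.lt(FilterApi.intColumn("b"), 10),
        FilterApi.eq(FilterApi.intColumn("c"), 1));
    FilterCompat.Filter filter = FilterCompat.get(predicate);
    System.out.println(filter);
  }
}
{code}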

[jira] [Created] (PARQUET-1744) Some filter throws ArrayIndexOutOfBoundsException

2020-01-08 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1744:


 Summary: Some filter throws ArrayIndexOutOfBoundsException
 Key: PARQUET-1744
 URL: https://issues.apache.org/jira/browse/PARQUET-1744
 Project: Parquet
  Issue Type: Sub-task
  Components: parquet-mr
Affects Versions: 1.11.0
Reporter: Yuming Wang


How to reproduce:
{noformat}

{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PARQUET-1740) Make ParquetFileReader.getFilteredRecordCount public

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1740:
-
Description: Please see  
[https://github.com/apache/spark/pull/26804/commits/4756e67dddbbf891c445efb78b202706e133cb46]
 for more details.
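
Once {{getFilteredRecordCount}} is public, a caller that pushes down a filter could compare it against the unfiltered count. A minimal sketch, assuming parquet-mr 1.11.x read options; the file path and predicate are illustrative:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class FilteredCount {
  public static void main(String[] args) throws Exception {
    Path file = new Path("/tmp/data.parquet");  // illustrative path
    Configuration conf = new Configuration();
    ParquetReadOptions options = HadoopReadOptions.builder(conf)
        .withRecordFilter(FilterCompat.get(
            FilterApi.eq(FilterApi.intColumn("pi"), 1)))
        .useColumnIndexFilter(true)
        .build();
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(file, conf), options)) {
      // getRecordCount() counts rows of the row groups that survive row-group
      // filtering; getFilteredRecordCount() additionally applies the
      // column-index (page-level) filter.
      System.out.println("total rows:    " + reader.getRecordCount());
      System.out.println("filtered rows: " + reader.getFilteredRecordCount());
    }
  }
}
{code}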

> Make ParquetFileReader.getFilteredRecordCount public
> 
>
> Key: PARQUET-1740
> URL: https://issues.apache.org/jira/browse/PARQUET-1740
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>
> Please see  
> [https://github.com/apache/spark/pull/26804/commits/4756e67dddbbf891c445efb78b202706e133cb46]
>  for more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PARQUET-1740) Make ParquetFileReader.getFilteredRecordCount public

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1740:
-
Affects Version/s: 1.11.0

> Make ParquetFileReader.getFilteredRecordCount public
> 
>
> Key: PARQUET-1740
> URL: https://issues.apache.org/jira/browse/PARQUET-1740
> Project: Parquet
>  Issue Type: Sub-task
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PARQUET-1740) Make ParquetFileReader.getFilteredRecordCount public

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1740:
-
Component/s: parquet-mr

> Make ParquetFileReader.getFilteredRecordCount public
> 
>
> Key: PARQUET-1740
> URL: https://issues.apache.org/jira/browse/PARQUET-1740
> Project: Parquet
>  Issue Type: Sub-task
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PARQUET-1740) Make ParquetFileReader.getFilteredRecordCount public

2020-01-08 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1740:


 Summary: Make ParquetFileReader.getFilteredRecordCount public
 Key: PARQUET-1740
 URL: https://issues.apache.org/jira/browse/PARQUET-1740
 Project: Parquet
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PARQUET-1739) Make Spark SQL support Column indexes

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1739:
-
Description: Make Spark SQL support 

> Make Spark SQL support Column indexes
> -
>
> Key: PARQUET-1739
> URL: https://issues.apache.org/jira/browse/PARQUET-1739
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> Make Spark SQL support 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PARQUET-1739) Make Spark SQL support Column indexes

2020-01-08 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1739:
-
Description: Make Spark SQL support Column indexes.  (was: Make Spark SQL 
support )
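As a side note while this support is being wired up, a hedged sketch of toggling Parquet's column-index filtering from a Spark job; the property name comes from parquet-mr 1.11 (ParquetInputFormat.COLUMN_INDEX_FILTERING_ENABLED) and should be treated as an assumption here:
{code:java}
// Assumed property: "parquet.filter.columnindex.enabled" (parquet-mr 1.11).
spark.sparkContext.hadoopConfiguration
  .setBoolean("parquet.filter.columnindex.enabled", true)
spark.read.parquet("/tmp/some_table").where("id = 5").show()
{code}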

> Make Spark SQL support Column indexes
> -
>
> Key: PARQUET-1739
> URL: https://issues.apache.org/jira/browse/PARQUET-1739
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> Make Spark SQL support Column indexes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PARQUET-1739) Make Spark SQL support Column indexes

2020-01-08 Thread Yuming Wang (Jira)
Yuming Wang created PARQUET-1739:


 Summary: Make Spark SQL support Column indexes
 Key: PARQUET-1739
 URL: https://issues.apache.org/jira/browse/PARQUET-1739
 Project: Parquet
  Issue Type: Improvement
  Components: parquet-mr
Affects Versions: 1.11.0
Reporter: Yuming Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1485) Snappy Decompressor/Compressor may cause direct memory leak

2019-11-11 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16971409#comment-16971409
 ] 

Yuming Wang commented on PARQUET-1485:
--

Could we backport this patch to 
[parquet-1.10.x|https://github.com/apache/parquet-mr/tree/parquet-1.10.x]?

> Snappy Decompressor/Compressor may cause direct memory leak
> ---
>
> Key: PARQUET-1485
> URL: https://issues.apache.org/jira/browse/PARQUET-1485
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.1
> Environment: parquet-1.8.1
> spark2.1
>Reporter: liupengcheng
>Assignee: liupengcheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> In our production environment, we encountered a direct memory OOM issue 
> caused by direct buffers not being released in time.
> After carefully checking the code, it seems that some methods of 
> SnappyDecompressor/SnappyCompressor do not release the direct buffer 
> manually. If too much direct memory is allocated and no GC happens, this bug 
> may result in a direct memory OOM.
> Moreover, if the `-XX:+DisableExplicitGC` JVM option is specified, the direct 
> memory OOM happens easily for large datasets.
> The problem seems to still exist in the latest code.
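(A general-purpose sketch, for readers of this thread, of eagerly releasing a direct buffer instead of waiting for GC; this is the usual JDK 8 reflection trick, shown for illustration only, not the actual parquet-mr patch:)
{code:java}
import java.nio.ByteBuffer

// Illustrative only: free a direct ByteBuffer eagerly via its (JDK 8) Cleaner.
def freeDirectBuffer(buf: ByteBuffer): Unit = {
  if (buf != null && buf.isDirect) {
    val cleanerMethod = buf.getClass.getMethod("cleaner")
    cleanerMethod.setAccessible(true)
    val cleaner = cleanerMethod.invoke(buf)
    if (cleaner != null) {
      cleaner.getClass.getMethod("clean").invoke(cleaner)
    }
  }
}
{code}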



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908097#comment-16908097
 ] 

Yuming Wang commented on PARQUET-1638:
--

The issue was fixed by HDFS-10223.

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> It's a Spark SQL application. These 3 tasks hang for more than 1.5 hours.
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang resolved PARQUET-1638.
--
Resolution: Invalid

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> It's a Spark SQL application. These 3 tasks hang for more than 1.5 hours.
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907898#comment-16907898
 ] 

Yuming Wang commented on PARQUET-1638:
--

cc [~rdblue] [~gszadovszky]

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> It's a Spark SQL application. These 3 tasks hang for more than 1.5 hours.
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Description: 
It's a Spark SQL application. These 3 tasks hang for more than 1.5 hours.

!image-2019-08-15-15-38-47-898.png!

!image-2019-08-15-15-43-56-912.png!

!image-2019-08-15-15-41-36-878.png!

  was:
!image-2019-08-15-15-38-47-898.png!

!image-2019-08-15-15-43-56-912.png!

!image-2019-08-15-15-41-36-878.png!


> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> It's a Spark SQL application. These 3 tasks hang for more than 1.5 hours.
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Attachment: (was: image-2019-08-15-15-39-12-288.png)

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Attachment: (was: image-2019-08-15-15-43-56-529.png)

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-41-36-878.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Attachment: image-2019-08-15-15-43-56-529.png

> ParquetFileReader.readFooter may be hang
> 
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png, 
> image-2019-08-15-15-43-56-529.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
>  
> !image-2019-08-15-15-39-12-288.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Summary: ParquetFileReader.readFooter and 
ParquetFileReader.readNextRowGroup may be hang  (was: 
ParquetFileReader.readFooter may be hang)

> ParquetFileReader.readFooter and ParquetFileReader.readNextRowGroup may be 
> hang
> ---
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png, 
> image-2019-08-15-15-43-56-529.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Attachment: image-2019-08-15-15-43-56-912.png

> ParquetFileReader.readFooter may be hang
> 
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png, 
> image-2019-08-15-15-43-56-529.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
>  
> !image-2019-08-15-15-39-12-288.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Description: 
!image-2019-08-15-15-38-47-898.png!

!image-2019-08-15-15-43-56-912.png!

!image-2019-08-15-15-41-36-878.png!

  was:
!image-2019-08-15-15-38-47-898.png!

 

!image-2019-08-15-15-39-12-288.png!

!image-2019-08-15-15-41-36-878.png!


> ParquetFileReader.readFooter may be hang
> 
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png, 
> image-2019-08-15-15-43-56-529.png, image-2019-08-15-15-43-56-912.png
>
>
> !image-2019-08-15-15-38-47-898.png!
> !image-2019-08-15-15-43-56-912.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Description: 
!image-2019-08-15-15-38-47-898.png!

 

!image-2019-08-15-15-39-12-288.png!

!image-2019-08-15-15-41-36-878.png!

  was:
!image-2019-08-15-15-38-47-898.png!

 

!image-2019-08-15-15-39-12-288.png!


> ParquetFileReader.readFooter may be hang
> 
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png
>
>
> !image-2019-08-15-15-38-47-898.png!
>  
> !image-2019-08-15-15-39-12-288.png!
> !image-2019-08-15-15-41-36-878.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1638:
-
Attachment: image-2019-08-15-15-41-36-878.png

> ParquetFileReader.readFooter may be hang
> 
>
> Key: PARQUET-1638
> URL: https://issues.apache.org/jira/browse/PARQUET-1638
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3
>Reporter: Yuming Wang
>Priority: Major
> Attachments: image-2019-08-15-15-38-47-898.png, 
> image-2019-08-15-15-39-12-288.png, image-2019-08-15-15-41-36-878.png
>
>
> !image-2019-08-15-15-38-47-898.png!
>  
> !image-2019-08-15-15-39-12-288.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (PARQUET-1638) ParquetFileReader.readFooter may be hang

2019-08-15 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1638:


 Summary: ParquetFileReader.readFooter may be hang
 Key: PARQUET-1638
 URL: https://issues.apache.org/jira/browse/PARQUET-1638
 Project: Parquet
  Issue Type: Bug
  Components: parquet-mr
Affects Versions: 1.8.3
Reporter: Yuming Wang
 Attachments: image-2019-08-15-15-38-47-898.png, 
image-2019-08-15-15-39-12-288.png

!image-2019-08-15-15-38-47-898.png!

 

!image-2019-08-15-15-39-12-288.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (PARQUET-1488) UserDefinedPredicate throw NullPointerException

2019-07-15 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885201#comment-16885201
 ] 

Yuming Wang commented on PARQUET-1488:
--

Thank you, [~gszadovszky]. Please work on this.

> UserDefinedPredicate throw NullPointerException
> ---
>
> Key: PARQUET-1488
> URL: https://issues.apache.org/jira/browse/PARQUET-1488
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>
> It throws {{NullPointerException}} after upgrade parquet to 1.11.0 when using 
> {{UserDefinedPredicate}}.
> The  
> [UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
>  is:
> {code:java}
> new UserDefinedPredicate[Binary] with Serializable {  
> 
>   private val strToBinary = Binary.fromReusedByteArray(v.getBytes)
> 
>   private val size = strToBinary.length   
> 
>   
> 
>   override def canDrop(statistics: Statistics[Binary]): Boolean = {   
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> < 0 ||  
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) > 0   
>   }   
> 
>   
> 
>   override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> == 0 && 
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) == 0  
>   }   
> 
>   
> 
>   override def keep(value: Binary): Boolean = {   
> 
> UTF8String.fromBytes(value.getBytes).startsWith(  
> 
>   UTF8String.fromBytes(strToBinary.getBytes)) 
> 
>   }   
> 
> } 
> 
> {code}
> The stack trace is:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:573)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:552)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:152)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$UserDefined.accept(Operators.java:377)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:181)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:309)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:86)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:81)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (PARQUET-1563) cannot read 'date' datatype which write by spark

2019-04-16 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818693#comment-16818693
 ] 

Yuming Wang commented on PARQUET-1563:
--

It's not a bug. You need to convert the value to a date yourself; Spark does this conversion here:

https://github.com/apache/spark/blob/21a7bfd5c324e6c82152229f1394f26afeae771c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetWriteSupport.scala#L145-L147
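For reference, Parquet's DATE logical type is stored as an int32 counting days since the Unix epoch, which is why the raw value 115 appears; decoding it yourself is a one-liner (a sketch, with illustrative variable names):
{code:java}
import java.time.LocalDate

// 115 days after 1970-01-01 is 1970-04-26, matching the reported value.
val rawDays = 115
val date = LocalDate.ofEpochDay(rawDays.toLong)
println(date) // 1970-04-26
{code}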

> cannot read 'date' datatype which write by spark
> 
>
> Key: PARQUET-1563
> URL: https://issues.apache.org/jira/browse/PARQUET-1563
> Project: Parquet
>  Issue Type: Bug
> Environment: jdk: 1.8
> macOS Mojave 10.14.4
>Reporter: Fan Mo
>Priority: Major
>
> I'm using Spark 2.4.0 to write a parquet file and trying to use 
> parquet-column-1.10.jar to read the data. All the primary datatypes work; 
> however, for the date datatype I get a meaningless number. For example, the 
> input date is '1970-04-26' but the output is '115'. If I use Spark to read 
> the data, it gets the correct date.
> My reader code is:
> val reader = ParquetFileReader.open(HadoopInputFile.fromPath(
>   new Path("testfile.snappy.parquet"), new Configuration()))
> val schema = reader.getFooter.getFileMetaData.getSchema
> var pages: PageReadStore = null
> while ({ pages = reader.readNextRowGroup(); pages != null }) {
>   val rows = pages.getRowCount
>   val columnIO = new ColumnIOFactory().getColumnIO(schema)
>   val recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema))
>   (0L until rows).foreach { _ =>
>     val simpleGroup = recordReader.read()
>     println(simpleGroup)
>   }
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1143) Update Java for format 2.4.0 changes

2019-04-10 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814281#comment-16814281
 ] 

Yuming Wang commented on PARQUET-1143:
--

[~rdblue] Should we update the *Fix Version/s*?

> Update Java for format 2.4.0 changes
> 
>
> Key: PARQUET-1143
> URL: https://issues.apache.org/jira/browse/PARQUET-1143
> Project: Parquet
>  Issue Type: Task
>  Components: parquet-mr
>Affects Versions: 1.9.0, 1.8.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1488) UserDefinedPredicate throw NullPointerException

2019-01-09 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738424#comment-16738424
 ] 

Yuming Wang commented on PARQUET-1488:
--

[~gszadovszky] Thanks a lot.

> UserDefinedPredicate throw NullPointerException
> ---
>
> Key: PARQUET-1488
> URL: https://issues.apache.org/jira/browse/PARQUET-1488
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Assignee: Gabor Szadovszky
>Priority: Major
>
> It throws {{NullPointerException}} after upgrade parquet to 1.11.0 when using 
> {{UserDefinedPredicate}}.
> The  
> [UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
>  is:
> {code:java}
> new UserDefinedPredicate[Binary] with Serializable {  
> 
>   private val strToBinary = Binary.fromReusedByteArray(v.getBytes)
> 
>   private val size = strToBinary.length   
> 
>   
> 
>   override def canDrop(statistics: Statistics[Binary]): Boolean = {   
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> < 0 ||  
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) > 0   
>   }   
> 
>   
> 
>   override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> == 0 && 
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) == 0  
>   }   
> 
>   
> 
>   override def keep(value: Binary): Boolean = {   
> 
> UTF8String.fromBytes(value.getBytes).startsWith(  
> 
>   UTF8String.fromBytes(strToBinary.getBytes)) 
> 
>   }   
> 
> } 
> 
> {code}
> The stack trace is:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:573)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:552)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:152)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$UserDefined.accept(Operators.java:377)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:181)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:309)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:86)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:81)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1488) UserDefinedPredicate throw NullPointerException

2019-01-08 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1488:
-
Description: 
It throws {{NullPointerException}} after upgrading Parquet to 1.11.0 when using 
{{UserDefinedPredicate}}.

The  
[UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
 is:
{code:java}
new UserDefinedPredicate[Binary] with Serializable {

  private val strToBinary = Binary.fromReusedByteArray(v.getBytes)
  private val size = strToBinary.length

  override def canDrop(statistics: Statistics[Binary]): Boolean = {
    val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR
    val max = statistics.getMax
    val min = statistics.getMin
    comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) < 0 ||
      comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) > 0
  }

  override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {
    val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR
    val max = statistics.getMax
    val min = statistics.getMin
    comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) == 0 &&
      comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) == 0
  }

  override def keep(value: Binary): Boolean = {
    UTF8String.fromBytes(value.getBytes).startsWith(
      UTF8String.fromBytes(strToBinary.getBytes))
  }
}
{code}
The stack trace is:
{noformat}
java.lang.NullPointerException
at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:573)
at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:552)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:152)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
at 
org.apache.parquet.filter2.predicate.Operators$UserDefined.accept(Operators.java:377)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:181)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
at 
org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:309)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:86)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:81)
{noformat}

  was:
It throw {{NullPointerException}} after upgrade parquet to 1.11.0 when using 
{{UserDefinedPredicate}}.

The  
[UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
 is:
{code}
new UserDefinedPredicate[Binary] with Serializable {
  
  private val strToBinary = Binary.fromReusedByteArray(v.getBytes)  
  
  private val size = strToBinary.length 
  

  
  override def canDrop(statistics: Statistics[Binary]): Boolean = { 
  
val comparator = 
PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
val max = statistics.getMax 
  
val min = statistics.getMin 
  

[jira] [Commented] (PARQUET-1488) UserDefinedPredicate throw NullPointerException

2019-01-08 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16737790#comment-16737790
 ] 

Yuming Wang commented on PARQUET-1488:
--

cc [~gszadovszky]

> UserDefinedPredicate throw NullPointerException
> ---
>
> Key: PARQUET-1488
> URL: https://issues.apache.org/jira/browse/PARQUET-1488
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.11.0
>Reporter: Yuming Wang
>Priority: Major
>
> It throw {{NullPointerException}} after upgrade parquet to 1.11.0 when using 
> {{UserDefinedPredicate}}.
> The  
> [UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
>  is:
> {code}
> new UserDefinedPredicate[Binary] with Serializable {  
> 
>   private val strToBinary = Binary.fromReusedByteArray(v.getBytes)
> 
>   private val size = strToBinary.length   
> 
>   
> 
>   override def canDrop(statistics: Statistics[Binary]): Boolean = {   
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> < 0 ||  
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) > 0   
>   }   
> 
>   
> 
>   override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {
> 
> val comparator = 
> PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR   
> val max = statistics.getMax   
> 
> val min = statistics.getMin   
> 
> comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) 
> == 0 && 
>   comparator.compare(min.slice(0, math.min(size, min.length)), 
> strToBinary) == 0  
>   }   
> 
>   
> 
>   override def keep(value: Binary): Boolean = {   
> 
> UTF8String.fromBytes(value.getBytes).startsWith(  
> 
>   UTF8String.fromBytes(strToBinary.getBytes)) 
> 
>   }   
> 
> } 
> 
> {code}
> The stack trace is:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:573)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:552)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:152)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$UserDefined.accept(Operators.java:377)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:181)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
>   at 
> org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:309)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:86)
>   at 
> org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:81)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1488) UserDefinedPredicate throw NullPointerException

2019-01-08 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1488:


 Summary: UserDefinedPredicate throw NullPointerException
 Key: PARQUET-1488
 URL: https://issues.apache.org/jira/browse/PARQUET-1488
 Project: Parquet
  Issue Type: Bug
  Components: parquet-mr
Affects Versions: 1.11.0
Reporter: Yuming Wang


It throws {{NullPointerException}} after upgrading Parquet to 1.11.0 when using 
{{UserDefinedPredicate}}.

The  
[UserDefinedPredicate|https://github.com/apache/spark/blob/faf73dcd33d04365c28c2846d3a1f845785f69df/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala#L548-L578]
 is:
{code}
new UserDefinedPredicate[Binary] with Serializable {

  private val strToBinary = Binary.fromReusedByteArray(v.getBytes)
  private val size = strToBinary.length

  override def canDrop(statistics: Statistics[Binary]): Boolean = {
    val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR
    val max = statistics.getMax
    val min = statistics.getMin
    comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) < 0 ||
      comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) > 0
  }

  override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {
    val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR
    val max = statistics.getMax
    val min = statistics.getMin
    comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) == 0 &&
      comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) == 0
  }

  override def keep(value: Binary): Boolean = {
    UTF8String.fromBytes(value.getBytes).startsWith(
      UTF8String.fromBytes(strToBinary.getBytes))
  }
}
{code}
The stack trace is:
{noformat}
java.lang.NullPointerException
at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:573)
at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFilters$$anon$1.keep(ParquetFilters.scala:552)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:152)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
at 
org.apache.parquet.filter2.predicate.Operators$UserDefined.accept(Operators.java:377)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:181)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter.visit(ColumnIndexFilter.java:56)
at 
org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:309)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:86)
at 
org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter$1.visit(ColumnIndexFilter.java:81)
{noformat}
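The NPE surfaces inside {{keep}}, which suggests the predicate can be handed a null value on this code path. As a hedged illustration (an assumption about the cause, not the eventual fix), a defensive variant of the override would be:
{code:java}
override def keep(value: Binary): Boolean = {
  // Guard against null: value.getBytes is presumably what throws in the trace above.
  if (value == null) {
    false
  } else {
    UTF8String.fromBytes(value.getBytes)
      .startsWith(UTF8String.fromBytes(strToBinary.getBytes))
  }
}
{code}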



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1434) Release parquet-mr 1.11.0

2018-12-12 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718664#comment-16718664
 ] 

Yuming Wang commented on PARQUET-1434:
--

It wasn't pushed to the Maven repository?

> Release parquet-mr 1.11.0
> -
>
> Key: PARQUET-1434
> URL: https://issues.apache.org/jira/browse/PARQUET-1434
> Project: Parquet
>  Issue Type: Task
>  Components: parquet-mr
>Reporter: Nandor Kollar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PARQUET-1434) Release parquet-mr 1.11.0

2018-12-12 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718664#comment-16718664
 ] 

Yuming Wang edited comment on PARQUET-1434 at 12/12/18 9:16 AM:


[https://github.com/apache/parquet-mr/releases/tag/apache-parquet-1.11.0]

It wasn't pushed to the Maven repository?


was (Author: q79969786):
It didn't pushed to maven repository?

> Release parquet-mr 1.11.0
> -
>
> Key: PARQUET-1434
> URL: https://issues.apache.org/jira/browse/PARQUET-1434
> Project: Parquet
>  Issue Type: Task
>  Components: parquet-mr
>Reporter: Nandor Kollar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1432) ACID support

2018-10-01 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1432:


 Summary: ACID support
 Key: PARQUET-1432
 URL: https://issues.apache.org/jira/browse/PARQUET-1432
 Project: Parquet
  Issue Type: New Feature
  Components: parquet-format, parquet-mr
Affects Versions: 1.10.1
Reporter: Yuming Wang


https://orc.apache.org/docs/acid.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1359) Out of Memory when reading large parquet file

2018-07-27 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559333#comment-16559333
 ] 

Yuming Wang commented on PARQUET-1359:
--

Is it duplicate to 
[PARQUET-980|https://issues.apache.org/jira/browse/PARQUET-980]?

> Out of Memory when reading large parquet file
> -
>
> Key: PARQUET-1359
> URL: https://issues.apache.org/jira/browse/PARQUET-1359
> Project: Parquet
>  Issue Type: Bug
>Reporter: Ryan Sachs
>Priority: Major
>
> Hi,
> We are successfully reading parquet files block by block, and are running 
> into a JVM out of memory issue in a certain edge case. Consider the following 
> scenario:
> Parquet file has one column and one block and is 10 GB
> Our JVM is 5 GB
> Is there any way to read such a file? Below is our implementation/stack trace
> {code:java}
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:778)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> try {
>   ParquetMetadata readFooter = ParquetFileReader.readFooter(hfsConfig, path,
>ParquetMetadataConverter.NO_FILTER);
>   MessageType schema = readFooter.getFileMetaData().getSchema();
>   long a = readFooter.getBlocks().stream().
> reduce(0L, (left, right) -> left > 
>   right.getTotalByteSize() ? left : right.getTotalByteSize(), 
> (leftl, rightl) -> leftl > rightl ? leftl : rightl);
>   for (BlockMetaData block : readFooter.getBlocks()) {
> try {
>   fileReader = new ParquetFileReader(hfsConfig, 
>readFooter.getFileMetaData(), path, Collections
>   .singletonList(block), schema.getColumns());
>   PageReadStore pages;
> while (null != (pages = fileReader.readNextRowGroup())) {
>   //exception gets thrown here on blocks larger than jvm memory
>   final long rows = pages.getRowCount();
>   final MessageColumnIO columnIO = new 
> ColumnIOFactory().getColumnIO(schema);
>   final RecordReader recordReader = 
> columnIO.getRecordReader(pages, new GroupRecordConverter(schema));
>   for (int i = 0; i < rows; i++) {
> final Group group = recordReader.read();
> int fieldCount = group.getType().getFieldCount();
> for (int field = 0; field < fieldCount; field++) {
>   int valueCount = group.getFieldRepetitionCount(field);
>   Type fieldType = group.getType().getType(field);
>   String fieldName = fieldType.getName();
>   for (int index = 0; index < valueCount; index++) {
> // Process data 
>   }
> }
>   }
> }
>   } catch (IOException e) {
> ...
>   } finally {
> ...
>   }
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1355) Improvement Binary write performance

2018-07-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1355:
-
Description: 
*Benchmark code*:
{code:java}
test("Parquet write benchmark") {
  val count = 100 * 1024 * 1024
  val numIters = 5
  withTempPath { path =>
val benchmark = new Benchmark(s"Parquet write benchmark 
${spark.sparkContext.version}", 5)

Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)").foreach { dt =>
  benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
spark.range(count).selectExpr(s"cast(id as $dt) as id")
  .write.mode("overwrite").parquet(path.getAbsolutePath)
  }
}
benchmark.run()
  }
}
{code}
*Result*:
{noformat}
-- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   10963 / 11344  0.0  
2192675973.8   1.0X
string type 28423 / 29437  0.0  
5684553922.2   0.4X
decimal(18, 0) type 11558 / 11696  0.0  
2311587203.6   0.9X
decimal(38, 18) type43858 / 44432  0.0  
8771537663.4   0.2X


-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   11633 / 12070  0.0  
2326572295.8   1.0X
string type 31374 / 32178  0.0  
6274760187.4   0.4X
decimal(18, 0) type 13019 / 13294  0.0  
2603841925.4   0.9X
decimal(38, 18) type50719 / 50983  0.0 
10143775007.6   0.2X
{noformat}
What mainly affects the performance is 
[toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83].
 If we don't use {{toByteBuffer}} when comparing binaries, the result is:
{noformat}
-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   11171 / 11508  0.0  
2234189382.0   1.0X
string type 30072 / 30290  0.0  
6014346455.4   0.4X
decimal(18, 0) type 12150 / 12239  0.0  
2430052708.8   0.9X
decimal(38, 18) type44974 / 45423  0.0  
8994773738.8   0.2X
{noformat}
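To make the comparison concrete, a rough sketch (not the actual patch) of comparing the backing byte arrays directly, unsigned and lexicographically, instead of wrapping them in ByteBuffers first:
{code:java}
// Illustrative only: compare two byte arrays as unsigned, lexicographically,
// without allocating an intermediate ByteBuffer per comparison.
def compareUnsigned(a: Array[Byte], b: Array[Byte]): Int = {
  val len = math.min(a.length, b.length)
  var i = 0
  while (i < len) {
    val cmp = (a(i) & 0xff) - (b(i) & 0xff)
    if (cmp != 0) return cmp
    i += 1
  }
  a.length - b.length
}
{code}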

  was:
*Benchmark code*:
{code:java}
test("Parquet write benchmark") {
  val count = 100 * 1024 * 1024
  val numIters = 5
  withTempPath { path =>
val benchmark = new Benchmark(s"Parquet write benchmark 
${spark.sparkContext.version}", 5)

Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)").foreach { dt =>
  benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
spark.range(count).selectExpr(s"cast(id as $dt) as id")
  .write.mode("overwrite").parquet(path.getAbsolutePath)
  }
}
benchmark.run()
  }
}
{code}
*Result*:
{noformat}
-- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   10963 / 11344  0.0  
2192675973.8   1.0X
string type 28423 / 29437  0.0  
5684553922.2   0.4X
decimal(18, 0) type 11558 / 11696  0.0  
2311587203.6   0.9X
decimal(38, 18) type43858 / 44432  0.0  
8771537663.4   0.2X


-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

[jira] [Updated] (PARQUET-1355) Improvement Binary write performance

2018-07-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1355:
-
Summary: Improvement Binary write performance  (was: Improvement parquet 
Binary write performance)

> Improvement Binary write performance
> 
>
> Key: PARQUET-1355
> URL: https://issues.apache.org/jira/browse/PARQUET-1355
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>
> *Benchmark code*:
> {code:java}
> test("Parquet write benchmark") {
>   val count = 100 * 1024 * 1024
>   val numIters = 5
>   withTempPath { path =>
> val benchmark = new Benchmark(s"Parquet write benchmark 
> ${spark.sparkContext.version}", 5)
> Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)").foreach { dt =>
>   benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
> spark.range(count).selectExpr(s"cast(id as $dt) as id")
>   .write.mode("overwrite").parquet(path.getAbsolutePath)
>   }
> }
> benchmark.run()
>   }
> }
> {code}
> *Result*:
> {noformat}
> -- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   10963 / 11344  0.0  
> 2192675973.8   1.0X
> string type 28423 / 29437  0.0  
> 5684553922.2   0.4X
> decimal(18, 0) type 11558 / 11696  0.0  
> 2311587203.6   0.9X
> decimal(38, 18) type43858 / 44432  0.0  
> 8771537663.4   0.2X
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   11633 / 12070  0.0  
> 2326572295.8   1.0X
> string type 31374 / 32178  0.0  
> 6274760187.4   0.4X
> decimal(18, 0) type 13019 / 13294  0.0  
> 2603841925.4   0.9X
> decimal(38, 18) type50719 / 50983  0.0 
> 10143775007.6   0.2X
> {noformat}
> The mainly is 
> [toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83]
>  affects performance.
>  If do not use the {{toByteBuffer}} when compare binary, the result is:
> {noformat}
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   11171 / 11508  0.0  
> 2234189382.0   1.0X
> string type 30072 / 30290  0.0  
> 6014346455.4   0.4X
> decimal(18, 0) type 12150 / 12239  0.0  
> 2430052708.8   0.9X
> decimal(38, 18) type44974 / 45423  0.0  
> 8994773738.8   0.2X
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1355) Improvement parquet Binary write performance

2018-07-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1355:
-
Description: 
*Benchmark code*:
{code:java}
test("Parquet write benchmark") {
  val count = 100 * 1024 * 1024
  val numIters = 5
  withTempPath { path =>
val benchmark = new Benchmark(s"Parquet write benchmark 
${spark.sparkContext.version}", 5)

Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)").foreach { dt =>
  benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
spark.range(count).selectExpr(s"cast(id as $dt) as id")
  .write.mode("overwrite").parquet(path.getAbsolutePath)
  }
}
benchmark.run()
  }
}
{code}
*Result*:
{noformat}
-- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   10963 / 11344  0.0  
2192675973.8   1.0X
string type 28423 / 29437  0.0  
5684553922.2   0.4X
decimal(18, 0) type 11558 / 11696  0.0  
2311587203.6   0.9X
decimal(38, 18) type43858 / 44432  0.0  
8771537663.4   0.2X


-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   11633 / 12070  0.0  
2326572295.8   1.0X
string type 31374 / 32178  0.0  
6274760187.4   0.4X
decimal(18, 0) type 13019 / 13294  0.0  
2603841925.4   0.9X
decimal(38, 18) type50719 / 50983  0.0 
10143775007.6   0.2X
{noformat}
What mainly affects performance is 
[toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83].
 If we do not use {{toByteBuffer}} when comparing binaries, the result is:
{noformat}
-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   11171 / 11508  0.0  
2234189382.0   1.0X
string type 30072 / 30290  0.0  
6014346455.4   0.4X
decimal(18, 0) type 12150 / 12239  0.0  
2430052708.8   0.9X
decimal(38, 18) type44974 / 45423  0.0  
8994773738.8   0.2X
{noformat}

  was:
*Benchmark code*:
{code:java}
test("Parquet write benchmark") {
  val count = 100 * 1024 * 1024
  val numIters = 5
  withTempPath { path =>
val benchmark = new Benchmark(s"Parquet write benchmark 
${spark.sparkContext.version}", 5)

Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)", 
"timestamp").foreach { dt =>
  benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
spark.range(count).selectExpr(s"cast(id as $dt) as id")
  .write.mode("overwrite").parquet(path.getAbsolutePath)
  }
}
benchmark.run()
  }
}
{code}

*Result*:

{noformat}
-- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   10963 / 11344  0.0  
2192675973.8   1.0X
string type 28423 / 29437  0.0  
5684553922.2   0.4X
decimal(18, 0) type 11558 / 11696  0.0  
2311587203.6   0.9X
decimal(38, 18) type43858 / 44432  0.0  
8771537663.4   0.2X


-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

[jira] [Assigned] (PARQUET-1355) Improvement parquet Binary write performance

2018-07-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang reassigned PARQUET-1355:


Assignee: Yuming Wang

> Improvement parquet Binary write performance
> 
>
> Key: PARQUET-1355
> URL: https://issues.apache.org/jira/browse/PARQUET-1355
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>
> *Benchmark code*:
> {code:java}
> test("Parquet write benchmark") {
>   val count = 100 * 1024 * 1024
>   val numIters = 5
>   withTempPath { path =>
> val benchmark = new Benchmark(s"Parquet write benchmark 
> ${spark.sparkContext.version}", 5)
> Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)", 
> "timestamp").foreach { dt =>
>   benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
> spark.range(count).selectExpr(s"cast(id as $dt) as id")
>   .write.mode("overwrite").parquet(path.getAbsolutePath)
>   }
> }
> benchmark.run()
>   }
> }
> {code}
> *Result*:
> {noformat}
> -- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   10963 / 11344  0.0  
> 2192675973.8   1.0X
> string type 28423 / 29437  0.0  
> 5684553922.2   0.4X
> decimal(18, 0) type 11558 / 11696  0.0  
> 2311587203.6   0.9X
> decimal(38, 18) type43858 / 44432  0.0  
> 8771537663.4   0.2X
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   11633 / 12070  0.0  
> 2326572295.8   1.0X
> string type 31374 / 32178  0.0  
> 6274760187.4   0.4X
> decimal(18, 0) type 13019 / 13294  0.0  
> 2603841925.4   0.9X
> decimal(38, 18) type50719 / 50983  0.0 
> 10143775007.6   0.2X
> {noformat}
> The mainly is 
> [toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83]
>  affects performance.
> If do not use the {{toByteBuffer}} when compare binary, the result is:
> {noformat}
> -- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
> Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
> Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
> Row(ns)   Relative
> 
> long type   11171 / 11508  0.0  
> 2234189382.0   1.0X
> string type 30072 / 30290  0.0  
> 6014346455.4   0.4X
> decimal(18, 0) type 12150 / 12239  0.0  
> 2430052708.8   0.9X
> decimal(38, 18) type44974 / 45423  0.0  
> 8994773738.8   0.2X
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1355) Improvement parquet Binary write performance

2018-07-23 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1355:


 Summary: Improvement parquet Binary write performance
 Key: PARQUET-1355
 URL: https://issues.apache.org/jira/browse/PARQUET-1355
 Project: Parquet
  Issue Type: Improvement
  Components: parquet-mr
Affects Versions: 1.10.0
Reporter: Yuming Wang


*Benchmark code*:
{code:java}
test("Parquet write benchmark") {
  val count = 100 * 1024 * 1024
  val numIters = 5
  withTempPath { path =>
val benchmark = new Benchmark(s"Parquet write benchmark 
${spark.sparkContext.version}", 5)

Seq("long", "string", "decimal(18, 0)", "decimal(38, 18)", 
"timestamp").foreach { dt =>
  benchmark.addCase(s"$dt type", numIters = numIters) { iter =>
spark.range(count).selectExpr(s"cast(id as $dt) as id")
  .write.mode("overwrite").parquet(path.getAbsolutePath)
  }
}
benchmark.run()
  }
}
{code}

*Result*:

{noformat}
-- Spark 2.3.3-SNAPSHOT with Parquet 1.8.3

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.3.3-SNAPSHOT:  Best/Avg Time(ms)Rate(M/s)   Per 
Row(ns)   Relative

long type   10963 / 11344  0.0  
2192675973.8   1.0X
string type 28423 / 29437  0.0  
5684553922.2   0.4X
decimal(18, 0) type 11558 / 11696  0.0  
2311587203.6   0.9X
decimal(38, 18) type43858 / 44432  0.0  
8771537663.4   0.2X


-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------
long type                                   11633 / 12070          0.0  2326572295.8       1.0X
string type                                 31374 / 32178          0.0  6274760187.4       0.4X
decimal(18, 0) type                         13019 / 13294          0.0  2603841925.4       0.9X
decimal(38, 18) type                        50719 / 50983          0.0 10143775007.6       0.2X
{noformat}


The main factor affecting performance is 
[toByteBuffer|https://github.com/apache/parquet-mr/blob/d61d221c9e752ce2cc0da65ede8b55653b3ae21f/parquet-column/src/main/java/org/apache/parquet/io/api/Binary.java#L83],
which is called when comparing binary values. If the comparison does not go through 
{{toByteBuffer}}, the result is as follows (a micro-benchmark isolating this cost is sketched after the results):
{noformat}
-- Spark 2.4.0-SNAPSHOT with Parquet 1.10.0

Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

Parquet write benchmark 2.4.0-SNAPSHOT:  Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------
long type                                   11171 / 11508          0.0  2234189382.0       1.0X
string type                                 30072 / 30290          0.0  6014346455.4       0.4X
decimal(18, 0) type                         12150 / 12239          0.0  2430052708.8       0.9X
decimal(38, 18) type                        44974 / 45423          0.0  8994773738.8       0.2X
{noformat}
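
To get a rough feel for the per-comparison overhead that {{toByteBuffer}} adds, a stand-alone micro-benchmark along the following lines can be used. It is a hypothetical illustration, not code from this ticket, and a serious measurement should use JMH rather than wall-clock loops.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical micro-benchmark: contrasts wrapping bytes in a ByteBuffer per
// comparison with comparing the arrays directly. Numbers are only indicative.
public final class ToByteBufferCost {

  static int compareBytes(byte[] x, byte[] y) {
    int len = Math.min(x.length, y.length);
    for (int i = 0; i < len; i++) {
      int c = (x[i] & 0xFF) - (y[i] & 0xFF);
      if (c != 0) return c;
    }
    return x.length - y.length;
  }

  public static void main(String[] args) {
    byte[] a = new byte[16];
    byte[] b = new byte[16];
    b[15] = 1;
    long iters = 50_000_000L;
    long acc = 0;

    long t0 = System.nanoTime();
    for (long i = 0; i < iters; i++) {
      acc += ByteBuffer.wrap(a).compareTo(ByteBuffer.wrap(b)); // allocates two wrappers per call
    }
    long t1 = System.nanoTime();
    for (long i = 0; i < iters; i++) {
      acc += compareBytes(a, b); // no per-call allocation
    }
    long t2 = System.nanoTime();

    System.out.println("ByteBuffer.wrap path: " + (t1 - t0) / 1_000_000 + " ms");
    System.out.println("direct byte path:     " + (t2 - t1) / 1_000_000 + " ms");
    System.out.println(acc); // keeps the JIT from eliminating the loops
  }
}
{code}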



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PARQUET-1338) PrimitiveType.equals throw NPE

2018-06-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang resolved PARQUET-1338.
--
Resolution: Won't Fix

> PrimitiveType.equals throw NPE
> --
>
> Key: PARQUET-1338
> URL: https://issues.apache.org/jira/browse/PARQUET-1338
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.10.1
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>
> Error message:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.parquet.schema.PrimitiveType.equals(PrimitiveType.java:614)
> {noformat}
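
The trace above only shows that {{PrimitiveType.equals}} dereferences something that can be null when two types are compared. As a generic illustration of the null-safe pattern involved (this is not the parquet-mr code; the {{meta}} field below is hypothetical), {{java.util.Objects.equals}} avoids the NPE for nullable members:

{code:java}
import java.util.Objects;

// Illustrative null-safe equals pattern; not the actual PrimitiveType code.
// "meta" stands for any nullable member that a naive
// this.meta.equals(other.meta) would throw NullPointerException on.
public final class TypeLike {
  private final String name;  // assumed non-null
  private final String meta;  // may be null

  public TypeLike(String name, String meta) {
    this.name = name;
    this.meta = meta;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof TypeLike)) return false;
    TypeLike other = (TypeLike) obj;
    // Objects.equals tolerates null on either side without throwing.
    return name.equals(other.name) && Objects.equals(meta, other.meta);
  }

  @Override
  public int hashCode() {
    return Objects.hash(name, meta);
  }
}
{code}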



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1338) PrimitiveType.equals throw NPE

2018-06-25 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1338:


 Summary: PrimitiveType.equals throw NPE
 Key: PARQUET-1338
 URL: https://issues.apache.org/jira/browse/PARQUET-1338
 Project: Parquet
  Issue Type: Bug
  Components: parquet-mr
Affects Versions: 1.10.1
Reporter: Yuming Wang


Error message:
{noformat}
java.lang.NullPointerException
at 
org.apache.parquet.schema.PrimitiveType.equals(PrimitiveType.java:614)
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PARQUET-1338) PrimitiveType.equals throw NPE

2018-06-25 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang reassigned PARQUET-1338:


Assignee: Yuming Wang

> PrimitiveType.equals throw NPE
> --
>
> Key: PARQUET-1338
> URL: https://issues.apache.org/jira/browse/PARQUET-1338
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.10.1
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>
> Error message:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.parquet.schema.PrimitiveType.equals(PrimitiveType.java:614)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1336) PrimitiveComparator should implements Serializable

2018-06-25 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1336:
-
Summary: PrimitiveComparator should implements Serializable   (was: 
BinaryComparator should implements Serializable )

> PrimitiveComparator should implements Serializable 
> ---
>
> Key: PARQUET-1336
> URL: https://issues.apache.org/jira/browse/PARQUET-1336
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> [info] Cause: java.lang.RuntimeException: java.io.NotSerializableException: 
> org.apache.parquet.schema.PrimitiveComparator$8
> [info] at 
> org.apache.parquet.hadoop.ParquetInputFormat.setFilterPredicate(ParquetInputFormat.java:211)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:399)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:349)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1791)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> [info] at org.apache.spark.scheduler.Task.run(Task.scala:109)
> [info] at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
> {code}
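
The {{NotSerializableException}} in the trace comes from serializing a filter predicate that references {{PrimitiveComparator$8}}, an anonymous subclass that does not implement {{Serializable}}. A minimal sketch of what the summary asks for, comparators that survive Java serialization, could look like the following (illustrative names, not the actual parquet-mr hierarchy):

{code:java}
import java.io.Serializable;
import java.util.Comparator;

// Illustrative sketch, not the actual parquet-mr code: a comparator base class
// that survives Java serialization, so a filter predicate holding it can be
// written into the job configuration.
public abstract class SerializableComparatorSketch<T> implements Comparator<T>, Serializable {
  private static final long serialVersionUID = 1L;

  // Defined as a constant so deserialization can resolve back to the singleton.
  public static final SerializableComparatorSketch<Long> SIGNED_INT64 =
      new SerializableComparatorSketch<Long>() {
        @Override
        public int compare(Long a, Long b) {
          return Long.compare(a, b);
        }

        private Object readResolve() {
          return SIGNED_INT64; // keep the singleton property across (de)serialization
        }
      };
}
{code}

With a base class like this, a predicate that {{ParquetInputFormat.setFilterPredicate}} serializes no longer drags a non-serializable comparator with it.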



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PARQUET-1336) BinaryComparator should implements Serializable

2018-06-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang reassigned PARQUET-1336:


Assignee: Yuming Wang

> BinaryComparator should implements Serializable 
> 
>
> Key: PARQUET-1336
> URL: https://issues.apache.org/jira/browse/PARQUET-1336
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> [info] Cause: java.lang.RuntimeException: java.io.NotSerializableException: 
> org.apache.parquet.schema.PrimitiveComparator$8
> [info] at 
> org.apache.parquet.hadoop.ParquetInputFormat.setFilterPredicate(ParquetInputFormat.java:211)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:399)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:349)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1791)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> [info] at org.apache.spark.scheduler.Task.run(Task.scala:109)
> [info] at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1336) BinaryComparator should implements Serializable

2018-06-23 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1336:
-
Description: 
{code:java}
[info] Cause: java.lang.RuntimeException: java.io.NotSerializableException: 
org.apache.parquet.schema.PrimitiveComparator$8
[info] at 
org.apache.parquet.hadoop.ParquetInputFormat.setFilterPredicate(ParquetInputFormat.java:211)
[info] at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:399)
[info] at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:349)
[info] at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
[info] at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
[info] at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
[info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
[info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
[info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
[info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
[info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
[info] at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1791)
[info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
[info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
[info] at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
[info] at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
[info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
[info] at org.apache.spark.scheduler.Task.run(Task.scala:109)
[info] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)

{code}

> BinaryComparator should implements Serializable 
> 
>
> Key: PARQUET-1336
> URL: https://issues.apache.org/jira/browse/PARQUET-1336
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.10.0
>Reporter: Yuming Wang
>Priority: Major
>
> {code:java}
> [info] Cause: java.lang.RuntimeException: java.io.NotSerializableException: 
> org.apache.parquet.schema.PrimitiveComparator$8
> [info] at 
> org.apache.parquet.hadoop.ParquetInputFormat.setFilterPredicate(ParquetInputFormat.java:211)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:399)
> [info] at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReaderWithPartitionValues$1.apply(ParquetFileFormat.scala:349)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
> [info] at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> [info] at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1791)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1162)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
> [info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> [info] at org.apache.spark.scheduler.Task.run(Task.scala:109)
> [info] at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1336) BinaryComparator should implements Serializable

2018-06-23 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1336:


 Summary: BinaryComparator should implements Serializable 
 Key: PARQUET-1336
 URL: https://issues.apache.org/jira/browse/PARQUET-1336
 Project: Parquet
  Issue Type: Improvement
  Components: parquet-mr
Affects Versions: 1.10.0
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1247) org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary

2018-06-23 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521311#comment-16521311
 ] 

Yuming Wang commented on PARQUET-1247:
--

How to reproduce?

> org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary
> -
>
> Key: PARQUET-1247
> URL: https://issues.apache.org/jira/browse/PARQUET-1247
> Project: Parquet
>  Issue Type: Bug
>Reporter: Shrutika modi
>Priority: Major
>
> java.lang.UnsupportedOperationException: 
> org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary
>  at org.apache.parquet.column.Dictionary.decodeToBinary(Dictionary.java:44)
>  at 
> org.apache.spark.sql.execution.vectorized.ColumnVector.getUTF8String(ColumnVector.java:625)
>  at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown
>  Source)
>  at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>  at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
>  at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at org.apache.spark.scheduler.Task.run(Task.scala:99)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1248) java.lang.UnsupportedOperationException: Unimplemented type: StringType

2018-06-05 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502780#comment-16502780
 ] 

Yuming Wang commented on PARQUET-1248:
--

[~modi.shrutika], How to reproduce this issue?

>  java.lang.UnsupportedOperationException: Unimplemented type: StringType
> 
>
> Key: PARQUET-1248
> URL: https://issues.apache.org/jira/browse/PARQUET-1248
> Project: Parquet
>  Issue Type: Bug
>Reporter: Shrutika modi
>Priority: Major
>
> java.lang.UnsupportedOperationException: Unimplemented type: StringType
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readIntBatch(VectorizedColumnReader.java:356)
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:183)
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:230)
>  at 
> org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:137)
>  at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
>  at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:133)
>  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
>  at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
>  at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:166)
>  at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
>  at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown
>  Source)
>  at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown
>  Source)
>  at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>  at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
>  at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
>  ... 8 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PARQUET-1238) Invalid links found in parquet site document page

2018-06-05 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502778#comment-16502778
 ] 

Yuming Wang commented on PARQUET-1238:
--

[~xuchuanyin],  Github repo: [https://github.com/apache/parquet-format]

 

> Invalid links found in parquet site document page
> -
>
> Key: PARQUET-1238
> URL: https://issues.apache.org/jira/browse/PARQUET-1238
> Project: Parquet
>  Issue Type: Bug
>Reporter: xuchuanyin
>Priority: Trivial
> Attachments: PARQUET-1238_fixed_invalid_links_in_latest_html_md.patch
>
>
> Links to pictures in document page are invalid, such as Section ‘File Format’ 
> and ‘Metadata’
>  
> Links to external documents in document page are invalid, such as Section 
> 'Motivation', 'Logical Types' and 'Data Pages'
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PARQUET-1317) ParquetMetadataConverter throw NPE

2018-06-04 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PARQUET-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated PARQUET-1317:
-
Description: 
How to reproduce:

{code:scala}
$ bin/spark-shell
scala> spark.range(10).selectExpr("cast(id as string) as id").coalesce(1).write.parquet("/tmp/parquet-1317")

$ java -jar ./parquet-tools/target/parquet-tools-1.10.1-SNAPSHOT.jar head --debug file:///tmp/parquet-1317/part-0-6cfafbdd-fdeb-4861-8499-8583852ba437-c000.snappy.parquet
{code}

{noformat}
java.io.IOException: Could not read footer: java.lang.NullPointerException

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:271)

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:202)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooters(ParquetFileReader.java:354)

at 
org.apache.parquet.tools.command.RowCountCommand.execute(RowCountCommand.java:88)

at org.apache.parquet.tools.Main.main(Main.java:223)

Caused by: java.lang.NullPointerException

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.getOriginalType(ParquetMetadataConverter.java:828)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.buildChildren(ParquetMetadataConverter.java:1173)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetSchema(ParquetMetadataConverter.java:1124)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:1058)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:1052)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:505)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:499)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:261)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:257)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

java.io.IOException: Could not read footer: 
java.lang.NullPointerException{noformat}

  was:
{noformat}
java.io.IOException: Could not read footer: java.lang.NullPointerException

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:271)

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:202)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooters(ParquetFileReader.java:354)

at 
org.apache.parquet.tools.command.RowCountCommand.execute(RowCountCommand.java:88)

at org.apache.parquet.tools.Main.main(Main.java:223)

Caused by: java.lang.NullPointerException

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.getOriginalType(ParquetMetadataConverter.java:828)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.buildChildren(ParquetMetadataConverter.java:1173)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetSchema(ParquetMetadataConverter.java:1124)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:1058)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:1052)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:505)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:499)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:261)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:257)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

java.io.IOException: Could not read footer: 
java.lang.NullPointerException{noformat}


> ParquetMetadataConverter throw NPE
> --
>
> Key: PARQUET-1317
> URL: https://issues.apache.org/jira/browse/PARQUET-1317
> Project: Parquet
>  Issue Type: Bug
>Affects Versions: 1.10.1
>Reporter: Yuming Wang

[jira] [Commented] (PARQUET-1317) ParquetMetadataConverter throw NPE

2018-06-04 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/PARQUET-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16499820#comment-16499820
 ] 

Yuming Wang commented on PARQUET-1317:
--

I'm working on this.

> ParquetMetadataConverter throw NPE
> --
>
> Key: PARQUET-1317
> URL: https://issues.apache.org/jira/browse/PARQUET-1317
> Project: Parquet
>  Issue Type: Bug
>Affects Versions: 1.10.1
>Reporter: Yuming Wang
>Priority: Major
>
> {noformat}
> java.io.IOException: Could not read footer: java.lang.NullPointerException
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:271)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:202)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readFooters(ParquetFileReader.java:354)
> at 
> org.apache.parquet.tools.command.RowCountCommand.execute(RowCountCommand.java:88)
> at org.apache.parquet.tools.Main.main(Main.java:223)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.parquet.format.converter.ParquetMetadataConverter.getOriginalType(ParquetMetadataConverter.java:828)
> at 
> org.apache.parquet.format.converter.ParquetMetadataConverter.buildChildren(ParquetMetadataConverter.java:1173)
> at 
> org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetSchema(ParquetMetadataConverter.java:1124)
> at 
> org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:1058)
> at 
> org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:1052)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:505)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:499)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:261)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:257)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> java.io.IOException: Could not read footer: 
> java.lang.NullPointerException{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PARQUET-1317) ParquetMetadataConverter throw NPE

2018-06-04 Thread Yuming Wang (JIRA)
Yuming Wang created PARQUET-1317:


 Summary: ParquetMetadataConverter throw NPE
 Key: PARQUET-1317
 URL: https://issues.apache.org/jira/browse/PARQUET-1317
 Project: Parquet
  Issue Type: Bug
Affects Versions: 1.10.1
Reporter: Yuming Wang


{noformat}
java.io.IOException: Could not read footer: java.lang.NullPointerException

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:271)

at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:202)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooters(ParquetFileReader.java:354)

at 
org.apache.parquet.tools.command.RowCountCommand.execute(RowCountCommand.java:88)

at org.apache.parquet.tools.Main.main(Main.java:223)

Caused by: java.lang.NullPointerException

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.getOriginalType(ParquetMetadataConverter.java:828)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.buildChildren(ParquetMetadataConverter.java:1173)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetSchema(ParquetMetadataConverter.java:1124)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:1058)

at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:1052)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:532)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:505)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:499)

at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:261)

at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:257)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

java.io.IOException: Could not read footer: 
java.lang.NullPointerException{noformat}
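
The footer conversion fails inside {{ParquetMetadataConverter.getOriginalType}} while mapping a column's type annotation. Without digging into the exact parquet-mr line, the failure mode matches the common Java pitfall of dereferencing or switching on a value that can be null; a defensive mapping sketch (hypothetical names, not the converter code) looks like this:

{code:java}
// Hypothetical sketch of a defensive type-annotation mapping; not the actual
// ParquetMetadataConverter code. Switching on a null enum reference throws
// NullPointerException in Java, so the null/unknown cases have to be handled
// explicitly to get a readable error instead.
final class ConvertedTypeMapperSketch {

  enum WireAnnotation { UTF8, DATE, DECIMAL }    // stand-in for the on-disk annotation
  enum ModelAnnotation { STRING, DATE, DECIMAL } // stand-in for the in-memory type

  static ModelAnnotation toModel(WireAnnotation wire) {
    if (wire == null) {
      return null; // caller decides how to treat "no annotation"
    }
    switch (wire) {
      case UTF8:    return ModelAnnotation.STRING;
      case DATE:    return ModelAnnotation.DATE;
      case DECIMAL: return ModelAnnotation.DECIMAL;
      default:
        throw new IllegalArgumentException("Unknown annotation: " + wire);
    }
  }
}
{code}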



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)