[jira] [Comment Edited] (SPARK-31345) Spark fails to write hive parquet table with empty array

2020-04-04 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17075314#comment-17075314
 ] 

Dongjoon Hyun edited comment on SPARK-31345 at 4/4/20, 9:39 PM:


For the record, the above example doesn't work in any 2.x version, for different 
reasons. This issue seems to be related to multiple JIRAs concerning 
`saveAsTable` and `array()` issues.
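A possible workaround on 2.x (a sketch of my own, not part of the original comment) is to let Spark's built-in Parquet datasource create the table instead of appending through the Hive writer path that rejects empty fields; the table name {{test_empty_array}} below is illustrative:
{code}
// Sketch of a 2.x workaround (assumption, not from the original comment):
// bypass Hive's DataWritableWriter by writing with Spark's native parquet
// source, which accepts empty arrays.
val df = spark.sql("select cast(array() as array<int>) as col1")

// Let Spark create the table implicitly using the built-in parquet format.
df.write.format("parquet").mode("overwrite").saveAsTable("default.test_empty_array")
{code}
This is consistent with the reporter's note below that the write succeeds when Spark creates the table implicitly.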


was (Author: dongjoon):
For the record, the above example doesn't work in any 2.x version, for different 
reasons.

> Spark fails to write hive parquet table with empty array
> 
>
> Key: SPARK-31345
> URL: https://issues.apache.org/jira/browse/SPARK-31345
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.2, 2.1.3, 2.2.3, 2.3.4, 2.4.5
> Environment: - Spark 2.4.5 (compiled against hadoop 2.10.0, and 
> bundled with hadoop 2.10.0 dependencies in `spark.yarn.archive`)
> - Hive 3.1.2
> - Hadoop 3.2.1
>Reporter: Naitree Zhu
>Priority: Major
>
> When writing to an existing Hive Parquet table using Spark SQL, I encountered an 
> error when writing an empty `array()` or `map()`.
> Test case to reproduce:
> {code:java}
> spark.sql("create table test_null (col1 array) stored as parquet")
> val df = spark.sql("select cast(array() as array) as col1")
> df.write.format("hive").mode("append").saveAsTable("default.test_null")
> {code}
> Exception raised:
> {code:java}
> 20/04/04 09:16:03 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 30, 
> test-node, executor 2): org.apache.spark.SparkException: Task failed while 
> writing rows.
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:257)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:177)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.run(Task.scala:123)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty 
> fields are illegal, the field should be ommited completely instead
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
>   at 
> parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
>   at 
> parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
>   at 
> org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
>   at 
> org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:245)
>   at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:242)
>   ... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the 
> field should be ommited completely instead
>   at 
> parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:186)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:113)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
>   ... 21 more
> {code}
> However, 

[jira] [Comment Edited] (SPARK-31345) Spark fails to write hive parquet table with empty array

2020-04-04 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17075305#comment-17075305
 ] 

Dongjoon Hyun edited comment on SPARK-31345 at 4/4/20, 9:29 PM:


Thank you for reporting, [~naitree]. This is fixed in 3.0.0.
{code}
scala> spark.version
res0: String = 3.0.0-preview2

scala> spark.sql("create table test_null (col1 array) stored as parquet")
res1: org.apache.spark.sql.DataFrame = []

scala> val df = spark.sql("select cast(array() as array<int>) as col1")
df: org.apache.spark.sql.DataFrame = [col1: array<int>]

scala> df.write.format("hive").mode("append").saveAsTable("default.test_null")

scala>
{code}
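The transcript above only shows that the write completes without an exception; a quick read-back check (my addition, assuming the same session and table) would be:
{code}
scala> spark.table("default.test_null").show()
// expected: a single row whose col1 is the empty array []
{code}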


was (Author: dongjoon):
Thank you for reporting, [~naitree]. This is fixed in 3.0.0.


> Spark fails to write hive parquet table with empty array
> 
>
> Key: SPARK-31345
> URL: https://issues.apache.org/jira/browse/SPARK-31345
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.4, 2.4.5
> Environment: - Spark 2.4.5 (compiled against hadoop 2.10.0, and 
> bundled with hadoop 2.10.0 dependencies in `spark.yarn.archive`)
> - Hive 3.1.2
> - Hadoop 3.2.1
>Reporter: Naitree Zhu
>Priority: Major
>
> When writing to an existing Hive Parquet table using Spark SQL, I encountered an 
> error when writing an empty `array()` or `map()`.
> Test case to reproduce:
> {code:java}
> spark.sql("create table test_null (col1 array) stored as parquet")
> val df = spark.sql("select cast(array() as array) as col1")
> df.write.format("hive").mode("append").saveAsTable("default.test_null")
> {code}
> Exception raised:
> {code:java}
> 20/04/04 09:16:03 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 30, 
> test-node, executor 2): org.apache.spark.SparkException: Task failed while 
> writing rows.
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:257)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:177)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.run(Task.scala:123)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty 
> fields are illegal, the field should be ommited completely instead
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
>   at 
> parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
>   at 
> parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
>   at 
> org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
>   at 
> org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:245)
>   at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:242)
>   ... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the 
> field should be ommited completely instead
>   at 
> parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:186)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:113)
>   at 
> 

[jira] [Comment Edited] (SPARK-31345) Spark fails to write hive parquet table with empty array

2020-04-04 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17075305#comment-17075305
 ] 

Dongjoon Hyun edited comment on SPARK-31345 at 4/4/20, 9:27 PM:


Thank you for reporting, [~naitree]. This is fixed in 3.0.0.



was (Author: dongjoon):
Thank you for reporting, [~naitree]. This is fixed via SPARK-29462.

> Spark fails to write hive parquet table with empty array
> 
>
> Key: SPARK-31345
> URL: https://issues.apache.org/jira/browse/SPARK-31345
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.4, 2.4.5
> Environment: - Spark 2.4.5 (compiled against hadoop 2.10.0, and 
> bundled with hadoop 2.10.0 dependencies in `spark.yarn.archive`)
> - Hive 3.1.2
> - Hadoop 3.2.1
>Reporter: Naitree Zhu
>Priority: Major
>
> When writing to an existing Hive Parquet table using Spark SQL, I encountered an 
> error when writing an empty `array()` or `map()`.
> Test case to reproduce:
> {code:java}
> spark.sql("create table test_null (col1 array) stored as parquet")
> val df = spark.sql("select cast(array() as array) as col1")
> df.write.format("hive").mode("append").saveAsTable("default.test_null")
> {code}
> Exception raised:
> {code:java}
> 20/04/04 09:16:03 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 30, 
> test-node, executor 2): org.apache.spark.SparkException: Task failed while 
> writing rows.
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:257)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:177)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.run(Task.scala:123)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty 
> fields are illegal, the field should be ommited completely instead
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
>   at 
> parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
>   at 
> parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
>   at 
> org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
>   at 
> org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:245)
>   at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:242)
>   ... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the 
> field should be ommited completely instead
>   at 
> parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:186)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:113)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
>   ... 21 more
> {code}
> However, letting Spark create the table implicitly succeeds.
> {code:java}
> spark.sql("drop table default.test_null")
>