The parquet-hadoop-bundle jar is pulled into the Spark Hive project. When you compress data with zstd, the CompressionCodecName class may be loaded preferentially from parquet-hadoop-bundle, which predates zstd support, and then the enum constant parquet.hadoop.metadata.CompressionCodecName.ZSTD cannot be found.
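
A minimal reproduction sketch (the session, table name and dataset are illustrative): with the old bundle on the classpath, asking the Hive Parquet writer for zstd fails with the stack trace below.

    import org.apache.spark.sql.SparkSession

    // Illustrative: a Hive-enabled session writing a Parquet table with zstd.
    val spark = SparkSession.builder()
      .appName("zstd-repro")
      .enableHiveSupport()
      .getOrCreate()

    // The Hive writer resolves this setting via CompressionCodecName.fromConf;
    // the old bundled enum has no ZSTD constant, so the write below throws.
    spark.sql("SET parquet.compression=zstd")
    spark.sql("CREATE TABLE zstd_test STORED AS PARQUET AS SELECT id FROM range(10)")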

> 18/12/20 10:35:28 ERROR Executor: Exception in task 0.2 in stage 1.0 (TID 5)
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant parquet.hadoop.metadata.CompressionCodecName.ZSTD /parquet
>         at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:109)
>         at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:243)
>         at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:175)
>         at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:174)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>         at org.apache.spark.scheduler.Task.run(Task.scala:121)
>         at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:406)
>         at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:412)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: No enum constant parquet.hadoop.metadata.CompressionCodecName.ZSTD
>         at java.lang.Enum.valueOf(Enum.java:238)
>         at parquet.hadoop.metadata.CompressionCodecName.valueOf(CompressionCodecName.java:24)
>         at parquet.hadoop.metadata.CompressionCodecName.fromConf(CompressionCodecName.java:34)
>         at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.initializeSerProperties(ParquetRecordWriterWrapper.java:94)
>         at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:61)
>         at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:125)
>         at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:114)
>         at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:261)
>         at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:246)
>         ... 15 more
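
To confirm which jar the offending class comes from, a quick sketch (run, for example, in spark-shell on the same classpath) prints the jar that provides the enum and the constants it defines; if the location points at parquet-hadoop-bundle, ZSTD will be missing from the list:

    // Print which jar provides the class named in the exception.
    val clazz = Class.forName("parquet.hadoop.metadata.CompressionCodecName")
    println(clazz.getProtectionDomain.getCodeSource.getLocation)

    // List the codec constants the loaded enum actually defines; the old
    // bundle has UNCOMPRESSED, SNAPPY, GZIP and LZO, but no ZSTD.
    clazz.getEnumConstants.foreach(println)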
