Hi Jitendra,

I'm not sure yet, but I think this may be an Avro dependency conflict. See
[1] for a better explanation than I can provide (especially the "Update" in
the accepted answer); [2] is the line that throws the exception. Is
org.apache.avro.JsonProperties.NULL_VALUE present in your runtime?

[1] https://stackoverflow.com/questions/6686753/nosuchfielderror-java
[2]
https://github.com/apache/parquet-mr/blob/6901a2040848c6b37fa61f4b0a76246445f396db/parquet-avro/src/main/java/org/apache/parquet/avro/AvroSchemaConverter.java#L261
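If you want to verify quickly, a small standalone probe run with the same
classpath as your job could tell you (the class name and structure below are
just a sketch of mine, not anything from Beam or Parquet). If I remember
correctly, JsonProperties.NULL_VALUE only exists in Avro 1.8+, so an older
Avro 1.7.x jar leaking in from the cluster's Hadoop classpath would explain
the NoSuchFieldError, but please verify the versions on your side:

```java
// Hypothetical standalone probe (names are mine): checks whether the field
// that parquet-avro's AvroSchemaConverter reads is actually present at
// runtime. A NoSuchFieldError usually means the code was compiled against a
// newer Avro than the one the cluster loads.
public class AvroFieldCheck {

    // Returns "present", "missing", or "no-avro" depending on what the
    // current classpath provides.
    static String checkNullValue() {
        try {
            Class<?> cls = Class.forName("org.apache.avro.JsonProperties");
            cls.getField("NULL_VALUE"); // throws NoSuchFieldException if absent
            return "present";
        } catch (ClassNotFoundException e) {
            return "no-avro";
        } catch (NoSuchFieldException e) {
            return "missing";
        }
    }

    public static void main(String[] args) throws Exception {
        String status = checkNullValue();
        System.out.println("JsonProperties.NULL_VALUE: " + status);
        if (!"no-avro".equals(status)) {
            // Also print where the Avro class was loaded from; on a cluster
            // this often reveals an older Avro jar bundled with Hadoop/Flink.
            Class<?> cls = Class.forName("org.apache.avro.JsonProperties");
            System.out.println("Avro loaded from: "
                + cls.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}
```

Run it with the exact classpath your Flink task managers use (e.g.
`java -cp <your-job-classpath> AvroFieldCheck`); if it prints "missing" or
points at an unexpected jar, that would confirm the version clash.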

Hope this helps!

Thanks,
Łukasz

Thu, Mar 14, 2019 at 05:11 jitendra sharma <[email protected]>
wrote:

> Hi All,
>
> I am trying to read a Snappy-compressed Parquet file in Apache Beam
> using the Flink runner on an AWS EMR cluster, but I am getting the error below:
>
> Caused by: org.apache.beam.sdk.util.UserCodeException: 
> java.lang.NoSuchFieldError: NULL_VALUE
>       at 
> org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
>       at 
> org.apache.beam.sdk.io.parquet.ParquetIO$ReadFiles$ReadFn$DoFnInvoker.invokeProcessElement(Unknown
>  Source)
>       at 
> org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
>       at 
> org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
>       at 
> org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.processElement(DoFnRunnerWithMetricsUpdate.java:63)
>       at 
> org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:128)
>       at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>       at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
>       at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NoSuchFieldError: NULL_VALUE
>       at 
> org.apache.parquet.avro.AvroSchemaConverter.convertFields(AvroSchemaConverter.java:246)
>       at 
> org.apache.parquet.avro.AvroSchemaConverter.convert(AvroSchemaConverter.java:231)
>       at 
> org.apache.parquet.avro.AvroReadSupport.prepareForRead(AvroReadSupport.java:130)
>       at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:183)
>       at 
> org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:156)
>       at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
>       at 
> org.apache.beam.sdk.io.parquet.ParquetIO$ReadFiles$ReadFn.processElement(ParquetIO.java:215)
>
>
> Any help/suggestion is appreciated.
>
> Regards,
> Jitendra
> Jitendra Sharma
>
