Hi All,

I am hitting the stack trace below while reading data from a Parquet file:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 7
        at parquet.bytes.BytesUtils.bytesToLong(BytesUtils.java:247)
        at parquet.column.statistics.LongStatistics.setMinMaxFromBytes(LongStatistics.java:47)
        at parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:249)
        at parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:543)
        at parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:520)
        at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:426)
        at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:389)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:457)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:457)
        at scala.Option.map(Option.scala:145)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$.readMetaData(ParquetTypes.scala:457)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$.readSchemaFromFile(ParquetTypes.scala:477)
        at org.apache.spark.sql.parquet.ParquetRelation.<init>(ParquetRelation.scala:65)
        at org.apache.spark.sql.SQLContext.parquetFile(SQLContext.scala:165)

Please suggest. It seems it is unable to convert some of the data: the failure happens in parquet.bytes.BytesUtils.bytesToLong while the footer's column statistics are being read, i.e. before any row data is touched.
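For context, here is a rough, simplified sketch (not the actual Parquet source) of what bytesToLong does: it assembles a little-endian long by indexing bytes[7] down to bytes[0], so if the min/max statistics blob in the footer holds fewer than 8 bytes, the first access throws ArrayIndexOutOfBoundsException: 7, which matches the trace above. The class and helper names below are hypothetical, for illustration only.

```java
public class BytesToLongDemo {

    // Simplified stand-in for parquet.bytes.BytesUtils.bytesToLong:
    // assembles a little-endian long and assumes exactly 8 input bytes.
    static long bytesToLong(byte[] bytes) {
        long value = 0;
        for (int i = 7; i >= 0; i--) {
            // First iteration reads bytes[7]; a shorter array (e.g. truncated
            // or mis-typed statistics) throws ArrayIndexOutOfBoundsException: 7.
            value = (value << 8) | (bytes[i] & 0xFFL);
        }
        return value;
    }

    public static void main(String[] args) {
        // A well-formed 8-byte little-endian value decodes fine.
        byte[] ok = {42, 0, 0, 0, 0, 0, 0, 0};
        System.out.println(bytesToLong(ok)); // prints 42

        // Statistics bytes shorter than 8 reproduce the failure mode.
        try {
            bytesToLong(new byte[4]);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ArrayIndexOutOfBoundsException: " + e.getMessage());
        }
    }
}
```

So one plausible cause is that the long column's min/max statistics were written with fewer than 8 bytes (e.g. by a different or older writer), and the footer reader chokes on them before Spark ever sees a row.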
