[
https://issues.apache.org/jira/browse/HIVE-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347809#comment-14347809
]
Xuefu Zhang commented on HIVE-9863:
-----------------------------------
More errors in hive.log:
{code}
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:265)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:212)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:332)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:715)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:251)
... 18 more
Caused by: java.lang.IllegalStateException: All the offsets listed in the split
should be found in the file. expected: [4, 4] found: [BlockMetaData{69644,
881917418 [ColumnMetaData{GZIP [guid] BINARY [PLAIN, BIT_PACKED], 4},
ColumnMetaData{GZIP [collection_name] BINARY [PLAIN_DICTIONARY, BIT_PACKED],
389571}, ColumnMetaData{GZIP [doc_type] BINARY [PLAIN_DICTIONARY, BIT_PACKED],
389790}, ColumnMetaData{GZIP [stage] INT64 [PLAIN_DICTIONARY, BIT_PACKED],
389887}, ColumnMetaData{GZIP [meta_timestamp] INT64 [RLE, PLAIN_DICTIONARY,
BIT_PACKED], 397673}, ColumnMetaData{GZIP [doc_timestamp] INT64 [RLE,
PLAIN_DICTIONARY, BIT_PACKED], 422161}, ColumnMetaData{GZIP [meta_size] INT32
[RLE, PLAIN_DICTIONARY, BIT_PACKED], 460215}, ColumnMetaData{GZIP
[content_size] INT32 [RLE, PLAIN_DICTIONARY, BIT_PACKED], 521728},
ColumnMetaData{GZIP [source] BINARY [RLE, PLAIN, BIT_PACKED], 683740},
ColumnMetaData{GZIP [delete_flag] BOOLEAN [RLE, PLAIN, BIT_PACKED], 683787},
ColumnMetaData{GZIP [meta] BINARY [RLE, PLAIN, BIT_PACKED], 683834},
ColumnMetaData{GZIP [content] BINARY [RLE, PLAIN, BIT_PACKED], 6992365}]}] out
of: [4, 129785482, 260224757] in range 0, 134217728
at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:180)
at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:138)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:111)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:76)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:66)
... 23 more
2015-03-04 15:54:52,374 WARN [task-result-getter-1]: scheduler.TaskSetManager
(Logging.scala:logWarning(71)) - Lost task 0.0 in stage 0.0 (TID 1, localhost):
java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:265)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:212)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:332)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:715)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:251)
... 18 more
Caused by: java.lang.IllegalStateException: All the offsets listed in the split
should be found in the file. expected: [4, 4] found: [BlockMetaData{69644,
881917418 [ColumnMetaData{GZIP [guid] BINARY [PLAIN, BIT_PACKED], 4},
ColumnMetaData{GZIP [collection_name] BINARY [PLAIN_DICTIONARY, BIT_PACKED],
389571}, ColumnMetaData{GZIP [doc_type] BINARY [PLAIN_DICTIONARY, BIT_PACKED],
389790}, ColumnMetaData{GZIP [stage] INT64 [PLAIN_DICTIONARY, BIT_PACKED],
389887}, ColumnMetaData{GZIP [meta_timestamp] INT64 [RLE, PLAIN_DICTIONARY,
BIT_PACKED], 397673}, ColumnMetaData{GZIP [doc_timestamp] INT64 [RLE,
PLAIN_DICTIONARY, BIT_PACKED], 422161}, ColumnMetaData{GZIP [meta_size] INT32
[RLE, PLAIN_DICTIONARY, BIT_PACKED], 460215}, ColumnMetaData{GZIP
[content_size] INT32 [RLE, PLAIN_DICTIONARY, BIT_PACKED], 521728},
ColumnMetaData{GZIP [source] BINARY [RLE, PLAIN, BIT_PACKED], 683740},
ColumnMetaData{GZIP [delete_flag] BOOLEAN [RLE, PLAIN, BIT_PACKED], 683787},
ColumnMetaData{GZIP [meta] BINARY [RLE, PLAIN, BIT_PACKED], 683834},
ColumnMetaData{GZIP [content] BINARY [RLE, PLAIN, BIT_PACKED], 6992365}]}] out
of: [4, 129785482, 260224757] in range 0, 134217728
at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:180)
at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:138)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:111)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:76)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:66)
... 23 more
2015-03-04 15:54:52,376 ERROR [task-result-getter-1]: scheduler.TaskSetManager
(Logging.scala:logError(75)) - Task 0 in stage 0.0 failed 1 times; aborting job
{code}
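For context on where this check lives: the IllegalStateException above is thrown by parquet.hadoop.ParquetRecordReader.initializeInternalReader when a row-group offset recorded in the input split has no matching row group in the file footer. The sketch below is a simplified reconstruction from the error message, not the actual parquet-mr code; all names are illustrative:
{code}
// Simplified, illustrative reconstruction of the offset sanity check that
// produces the IllegalStateException above. Not the parquet-mr source; the
// real check is in parquet.hadoop.ParquetRecordReader.initializeInternalReader.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class SplitOffsetCheck {
  static void verify(long[] splitOffsets, long[] footerOffsets,
                     long rangeStart, long rangeEnd) {
    // Collect every row group in the footer whose start offset the split claims.
    List<Long> found = new ArrayList<Long>();
    for (long footerOffset : footerOffsets) {
      for (long splitOffset : splitOffsets) {
        if (footerOffset == splitOffset) {
          found.add(footerOffset);
          break; // each footer row group matches at most once
        }
      }
    }
    // The split lists splitOffsets.length row groups, and each must exist in
    // the footer. Here the split says [4, 4] but the footer has only one row
    // group starting at 4, so found.size() == 1 != 2 and the check fails.
    if (found.size() != splitOffsets.length) {
      throw new IllegalStateException(
          "All the offsets listed in the split should be found in the file."
          + " expected: " + Arrays.toString(splitOffsets)
          + " found: " + found
          + " out of: " + Arrays.toString(footerOffsets)
          + " in range " + rangeStart + ", " + rangeEnd);
    }
  }
}
{code}
The duplicated expected offset ([4, 4]) suggests the combined split carries row-group offsets that do not belong to the file being read, which fits the CombineHiveInputFormat / HadoopShimsSecure$CombineFileRecordReader path visible in both traces.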
> Querying parquet tables fails with IllegalStateException [Spark Branch]
> -----------------------------------------------------------------------
>
> Key: HIVE-9863
> URL: https://issues.apache.org/jira/browse/HIVE-9863
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Xuefu Zhang
>
> This does not necessarily happen only in the Spark branch: queries such as
> select count(*) from table_name fail with the following error:
> {code}
> hive> select * from content limit 2;
> OK
> Failed with exception java.io.IOException:java.lang.IllegalStateException:
> All the offsets listed in the split should be found in the file. expected:
> [4, 4] found: [BlockMetaData{69644, 881917418 [ColumnMetaData{GZIP [guid]
> BINARY [PLAIN, BIT_PACKED], 4}, ColumnMetaData{GZIP [collection_name] BINARY
> [PLAIN_DICTIONARY, BIT_PACKED], 389571}, ColumnMetaData{GZIP [doc_type]
> BINARY [PLAIN_DICTIONARY, BIT_PACKED], 389790}, ColumnMetaData{GZIP [stage]
> INT64 [PLAIN_DICTIONARY, BIT_PACKED], 389887}, ColumnMetaData{GZIP
> [meta_timestamp] INT64 [RLE, PLAIN_DICTIONARY, BIT_PACKED], 397673},
> ColumnMetaData{GZIP [doc_timestamp] INT64 [RLE, PLAIN_DICTIONARY,
> BIT_PACKED], 422161}, ColumnMetaData{GZIP [meta_size] INT32 [RLE,
> PLAIN_DICTIONARY, BIT_PACKED], 460215}, ColumnMetaData{GZIP [content_size]
> INT32 [RLE, PLAIN_DICTIONARY, BIT_PACKED], 521728}, ColumnMetaData{GZIP
> [source] BINARY [RLE, PLAIN, BIT_PACKED], 683740}, ColumnMetaData{GZIP
> [delete_flag] BOOLEAN [RLE, PLAIN, BIT_PACKED], 683787}, ColumnMetaData{GZIP
> [meta] BINARY [RLE, PLAIN, BIT_PACKED], 683834}, ColumnMetaData{GZIP
> [content] BINARY [RLE, PLAIN, BIT_PACKED], 6992365}]}] out of: [4,
> 129785482, 260224757] in range 0, 134217728
> Time taken: 0.253 seconds
> hive>
> {code}
> I can reproduce the problem in both local and yarn-cluster modes. It also
> seems to happen with MR, so I suspect this is a Parquet problem.
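Since both traces go through CombineHiveInputFormat, one way to narrow this down (a diagnostic suggestion, not a confirmed fix) is to rerun the query with split combining disabled, so each Parquet file gets its own split:
{code}
hive> set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
hive> select * from content limit 2;
{code}
If the query succeeds with plain HiveInputFormat, that supports the theory that the combined splits carry mismatched row-group offsets.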