[ https://issues.apache.org/jira/browse/HIVE-26902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
yanbin.zhang updated HIVE-26902:
--------------------------------
External issue URL: https://issues.apache.org/jira/browse/HIVE-22373
> Failed to close AbstractFileMergeOperator
> -----------------------------------------
>
> Key: HIVE-26902
> URL: https://issues.apache.org/jira/browse/HIVE-26902
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Affects Versions: 3.1.2
> Environment: hadoop:3.2.1
> hive:3.1.2
> spark:2.4.6
> hive on spark
> Reporter: zhenkuan_zhang
> Priority: Major
> Fix For: NA
>
>
> When I set hive.merge.sparkfiles to true, an error is sometimes reported
> while SQL is running. The error log is as follows:
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to close AbstractFileMergeOperator
> at org.apache.hadoop.hive.ql.exec.spark.SparkMergeFileRecordHandler.close(SparkMergeFileRecordHandler.java:115)
> at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
> at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
> at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
> at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
> at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:2212)
> at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:2212)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> at org.apache.spark.scheduler.Task.run(Task.scala:123)
> at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
> at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to close AbstractFileMergeOperator
> at org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator.closeOp(AbstractFileMergeOperator.java:315)
> at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.closeOp(OrcFileMergeOperator.java:265)
> at org.apache.hadoop.hive.ql.exec.spark.SparkMergeFileRecordHandler.close(SparkMergeFileRecordHandler.java:113)
> ... 17 more
> Caused by: java.io.IOException: Unable to rename hdfs://olapCluster/user/hive/warehouse/bi_dw.db/kpy_sfc_fyd_parts_d74_hour_temp/.hive-staging_hive_2023-01-03_13-15-16_144_4347904191947316325-50073/_task_tmp.-ext-10000/_tmp.000003_0 to hdfs://olapCluster/user/hive/warehouse/bi_dw.db/sfc__temp/.hive-staging_hive_2023-01-03_13-15-16_144_4347904191947316325-50073/_tmp.-ext-10000/000003_0
> at org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator.closeOp(AbstractFileMergeOperator.java:254)
> ... 19 more
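>
> The root cause is the "Unable to rename" IOException from AbstractFileMergeOperator.closeOp: Hadoop's FileSystem.rename(Path, Path) signals failure by returning false rather than throwing, for example when the source file is already gone or the destination is in an unexpected state, and Hive wraps that false return in the exception seen above. A minimal sketch of this close-time promotion pattern follows; the class and method names are illustrative, not Hive's actual implementation:
>
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class MergeCloseSketch {
>     // Hypothetical stand-in for the rename step in AbstractFileMergeOperator.closeOp.
>     // fs.rename() returns false (instead of throwing) on failure, e.g. when a
>     // retried or speculative attempt of the same task has already moved or
>     // removed the source, or the destination path already exists.
>     static void promoteTaskOutput(FileSystem fs, Path taskTmp, Path finalPath)
>             throws IOException {
>         if (!fs.exists(taskTmp)) {
>             throw new IOException("Source " + taskTmp + " is missing");
>         }
>         if (!fs.rename(taskTmp, finalPath)) {
>             // This false return is what surfaces as "Unable to rename ... to ..." above.
>             throw new IOException("Unable to rename " + taskTmp + " to " + finalPath);
>         }
>     }
> }
>
> With hive.merge.sparkfiles=true, each Spark merge task performs this promotion when it closes, so two attempts of the same task (for instance under speculative execution or a retried stage) racing on the same _tmp.-ext-10000 name is one plausible way rename() ends up returning false.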
--
This message was sent by Atlassian Jira
(v8.20.10#820010)