wxd5146 opened a new issue, #5926: URL: https://github.com/apache/kyuubi/issues/5926
### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

### Search before asking

- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.

### Describe the bug

Preconditions:

1. Run SQL on the Spark engine.
2. The Spark engine writes to and reads from Hive.
3. The Hive table is created as ORC or Parquet.

When I execute SQL like

```sql
insert overwrite table tpch.ods_tpch_lineitem_d_1 select distinct * from tpch.ods_tpch_lineitem_d;
```

and check the HDFS path, I find that the Kyuubi Spark engine deletes the table's HDFS path and only recreates it in the last DAG stage.

The issue: if the SQL fails before reaching the last stage, the HDFS path of the Hive table is lost. When I then close the Spark session, open a new session, and run the same `insert overwrite` statement again, it cannot find the HDFS path and the task fails.

### Affects Version(s)

1.7.3/1.8.0

### Kyuubi Server Log Output

_No response_

### Kyuubi Engine Log Output

_No response_

### Kyuubi Server Configurations

_No response_

### Kyuubi Engine Configurations

_No response_

### Additional context

_No response_

### Are you willing to submit PR?

- [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
- [ ] No. I cannot submit a PR at this time.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@kyuubi.apache.org
For queries about this service, please contact Infrastructure at: users@infra.apache.org
