singhpk234 commented on issue #4383:
URL: https://github.com/apache/iceberg/issues/4383#issuecomment-1077251204


   Looks like this might be causing the issue (I might be wrong here):
   we pass the conf held by HadoopFileIO to HadoopOutputFile:
   
   
https://github.com/apache/iceberg/blob/ea8bbe749753e3d1ddc595ce75a88a55a442a67e/core/src/main/java/org/apache/iceberg/hadoop/HadoopFileIO.java#L63-L65
   
   which is then modified in the ORC write path:
   
   
https://github.com/apache/iceberg/blob/ea8bbe749753e3d1ddc595ce75a88a55a442a67e/orc/src/main/java/org/apache/iceberg/orc/ORC.java#L165-L174
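   To illustrate the concern (a hypothetical sketch, not the actual Iceberg code, and the option name below is made up): if a write path mutates the `Configuration` it was handed, that change leaks back into the shared conf held by HadoopFileIO and is visible to every later reader/writer using the same FileIO:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class SharedConfMutation {
     public static void main(String[] args) {
       // One Configuration instance, shared the way HadoopFileIO shares its conf
       Configuration shared = new Configuration();
   
       // A write path that tweaks the conf it was handed (analogous to the ORC builder)
       writeOrc(shared);
   
       // The mutation is now visible to every other user of the shared conf
       System.out.println(shared.get("orc.example.option")); // prints "mutated", not null
     }
   
     private static void writeOrc(Configuration conf) {
       // Hypothetical option name, used only to demonstrate the side effect
       conf.set("orc.example.option", "mutated");
     }
   }
   ```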
   
   
   It looks like it fails for ORC only:
   > org.apache.iceberg.spark.extensions.TestCopyOnWriteMerge > testMergeWithSnapshotIsolation[catalogName = testhive, implementation = org.apache.iceberg.spark.SparkCatalog, config = {type=hive, default-namespace=default}, format = orc, vectorized = true, distributionMode = none] FAILED
   
   Should we always create a new conf and copy the existing Hadoop conf into it, to avoid this?
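   A minimal sketch of that idea (just an assumption about how the fix could look, using Configuration's copy constructor and a made-up option key):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class DefensiveConfCopy {
     // Return a copy of the shared conf so writer-specific settings
     // never leak back into the Configuration held by HadoopFileIO.
     static Configuration forWriter(Configuration shared) {
       Configuration copy = new Configuration(shared); // copies all current properties
       copy.set("orc.example.option", "writer-specific-value"); // hypothetical key
       return copy;
     }
   
     public static void main(String[] args) {
       Configuration shared = new Configuration();
       Configuration writerConf = forWriter(shared);
       System.out.println(shared.get("orc.example.option"));     // null: shared conf untouched
       System.out.println(writerConf.get("orc.example.option")); // writer-specific-value
     }
   }
   ```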

