Rajesh Balamohan created HIVE-24862:
---------------------------------------

             Summary: Fix race condition causing NPE during dynamic partition loading
                 Key: HIVE-24862
                 URL: https://issues.apache.org/jira/browse/HIVE-24862
             Project: Hive
          Issue Type: Improvement
            Reporter: Rajesh Balamohan


The following properties default to 15 threads:
{noformat}
hive.load.dynamic.partitions.thread
hive.mv.files.thread  
{noformat}
During loadDynamicPartitions, {{newFiles}} ends up being initialized without synchronization (HIVE-20661, HIVE-24738).

[https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2871]

This causes a race condition when the dynamic partition threads internally make use of {{hive.mv.files.thread}} in copyFiles/replaceFiles, which in turn leads to an NPE during retrieval in {{addInsertFileInformation()}}.

Example stack trace:
{noformat}
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.fs.FileSystem.fixRelativePart(FileSystem.java:2734)
  at org.apache.hadoop.hdfs.DistributedFileSystem.fixRelativePart(DistributedFileSystem.java:3396)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1740)
  at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:1740)
  at org.apache.hadoop.hive.ql.metadata.Hive.addInsertFileInformation(Hive.java:3566)
  at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:3540)
  at org.apache.hadoop.hive.ql.metadata.Hive.loadPartitionInternal(Hive.java:2414)
  at org.apache.hadoop.hive.ql.metadata.Hive.lambda$loadDynamicPartitions$4(Hive.java:2909)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{noformat}
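The general pattern of the fix can be sketched outside Hive: wrap the shared list in a synchronized view so concurrent {{add()}} calls from the partition-loading threads cannot corrupt its internal state. This is a minimal illustration only; the class and variable names below are hypothetical stand-ins for the {{newFiles}} list in {{Hive.java}}, not Hive's actual code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class NewFilesRaceSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the shared newFiles list. A plain ArrayList written to
        // by many threads can race during internal resizing and leave null
        // slots behind, which later surface as an NPE when the paths are
        // retrieved. The synchronized wrapper serializes the add() calls.
        List<String> newFiles = Collections.synchronizedList(new ArrayList<>());

        // 15 threads, mirroring the default thread-count of the properties above.
        ExecutorService pool = Executors.newFixedThreadPool(15);
        for (int t = 0; t < 15; t++) {
            final int id = t;
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    newFiles.add("part-" + id + "/file-" + i);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        // With the synchronized wrapper every element is present and non-null.
        System.out.println(newFiles.size());
        System.out.println(newFiles.contains(null));
    }
}
```

With the wrapper, all 15 000 entries are present and none are null; with a bare ArrayList the same run can lose entries or leave nulls, matching the NPE seen above.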



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
