yihua opened a new pull request, #5841:
URL: https://github.com/apache/hudi/pull/5841

   ## What is the purpose of the pull request
   
   When reading the metadata table directly with the metadata table path in Spark, i.e., `spark.read.format("hudi").load("<base_path>/.hoodie/metadata/").show`, it throws a NullPointerException when creating the HFile reader:
   
   ```
   Caused by: java.lang.NullPointerException
     at 
org.apache.hudi.org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:178)
     at 
org.apache.hudi.org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:167)
     at 
org.apache.hudi.org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:163)
     at 
org.apache.hudi.HoodieBaseRelation$.$anonfun$createHFileReader$1(HoodieBaseRelation.scala:531)
     at 
org.apache.hudi.HoodieBaseRelation.$anonfun$createBaseFileReader$1(HoodieBaseRelation.scala:482)
     at 
org.apache.hudi.HoodieMergeOnReadRDD.readBaseFile(HoodieMergeOnReadRDD.scala:130)
     at 
org.apache.hudi.HoodieMergeOnReadRDD.compute(HoodieMergeOnReadRDD.scala:100)
   ```
   
   This exception only happens when the metadata table contains HFile base files.
   
   The root cause is that after `new SerializableConfiguration(hadoopConf)` is broadcast to the executors, the deserialized instance on the executor side no longer contains the wrapped `configuration` instance, i.e., `hadoopConfBroadcast.value.get()` returns null, so the configuration is not properly broadcast.  The HFile `CacheConfig` then fails to initialize with the null Hadoop conf.  Replacing `SerializableConfiguration` with `SerializableWritable` solves the problem.
   
   Note that this problem is only reproducible with `yarn` as the Spark master; with `local` as the master, the exception does not occur.
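   The failure mode above — a wrapper object whose payload arrives null after being shipped to another JVM — typically comes from a `transient` field that the serialization path does not restore. The following minimal Java sketch (not the actual Spark/Hudi classes; class names and the `String` payload are illustrative stand-ins for the Hadoop `Configuration`) contrasts a naive wrapper, whose transient payload is dropped by a serialization round trip, with a Writable-style wrapper that explicitly re-serializes the payload in `writeObject`/`readObject`, which is the pattern `SerializableWritable` relies on:

   ```java
   import java.io.*;

   public class BroadcastSketch {

       // Naive wrapper: the payload is transient and never written out,
       // so after deserialization get() returns null -- analogous to the
       // broadcast conf arriving empty on the executors.
       static class NaiveWrapper implements Serializable {
           private transient String payload;
           NaiveWrapper(String payload) { this.payload = payload; }
           String get() { return payload; }
       }

       // Writable-style wrapper: custom writeObject/readObject explicitly
       // serialize the transient payload, so it survives the round trip.
       static class WritableWrapper implements Serializable {
           private transient String payload;
           WritableWrapper(String payload) { this.payload = payload; }
           String get() { return payload; }
           private void writeObject(ObjectOutputStream out) throws IOException {
               out.defaultWriteObject();
               out.writeUTF(payload);
           }
           private void readObject(ObjectInputStream in)
                   throws IOException, ClassNotFoundException {
               in.defaultReadObject();
               payload = in.readUTF();
           }
       }

       // Serialize and deserialize an object, simulating the driver-to-executor hop.
       @SuppressWarnings("unchecked")
       static <T> T roundTrip(T obj) throws Exception {
           ByteArrayOutputStream bos = new ByteArrayOutputStream();
           try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
               oos.writeObject(obj);
           }
           try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
               return (T) ois.readObject();
           }
       }

       public static void main(String[] args) throws Exception {
           System.out.println(roundTrip(new NaiveWrapper("hadoop-conf")).get());    // null
           System.out.println(roundTrip(new WritableWrapper("hadoop-conf")).get()); // hadoop-conf
       }
   }
   ```

   The sketch only models the transient-payload pitfall; the exact reason the broadcast `SerializableConfiguration` lost its payload in this environment may differ, but the observable symptom (`get()` returning null on the executor) matches the naive wrapper here.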
   
   ## Brief change log
   
     - Replace `SerializableConfiguration` with `SerializableWritable` for broadcasting the Hadoop conf.
   
   ## Verify this pull request
   
   After this PR, the metadata table with base files in HFile format can be successfully read on EMR using Spark 3.1.3 and 3.2.1, with `yarn` as master.
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
    - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA.
   

