alexeykudinkin commented on a change in pull request #5004:
URL: https://github.com/apache/hudi/pull/5004#discussion_r830393033



##########
File path: 
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieHFileReader.java
##########
@@ -250,7 +259,7 @@ public BloomFilter readBloomFilter() {
    */
   public List<Pair<String, R>> readRecords(List<String> keys, Schema schema) 
throws IOException {
     this.schema = schema;
-    reader.loadFileInfo();
+    reader.getHFileInfo();

Review comment:
       @yihua are you planning on removing them?

##########
File path: packaging/hudi-flink-bundle/pom.xml
##########
@@ -147,10 +148,18 @@
 
                   <include>org.apache.hbase:hbase-common</include>
                   <include>org.apache.hbase:hbase-client</include>
+                  <include>org.apache.hbase:hbase-hadoop-compat</include>

Review comment:
       Yeah, it's a tedious manual process for sure, but I think we can do it 
pretty fast: we just look at the packages imported by HFile, then at the files 
those import, and so on. After that we can run the tests to check whether 
we collected the set properly.
   
   The hypothesis is that this set should be reasonably bounded (why wouldn't 
it be?), so this iteration should be pretty fast. 
   
   Can you please create a task and link it here to follow-up?
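
   The iteration described above is essentially a breadth-first transitive 
closure over the import graph. A minimal sketch, assuming the graph has 
already been extracted (e.g. from `jdeps` output) into a map from class to 
imported classes — the class names and map contents here are purely 
illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ImportClosure {

    /**
     * Collects the transitive closure of classes reachable from {@code root}
     * via the given import graph. BFS; each class is visited once, so the
     * walk terminates as soon as the set stops growing.
     */
    public static Set<String> closure(String root, Map<String, List<String>> imports) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            String cls = queue.poll();
            if (!seen.add(cls)) {
                continue; // already collected; skip to avoid cycles
            }
            // Classes with no recorded imports are leaves of the graph.
            queue.addAll(imports.getOrDefault(cls, List.of()));
        }
        return seen;
    }
}
```

If the resulting set is indeed bounded, its size after one or two refinement 
passes tells us directly which artifacts need an `<include>` entry in the 
bundle pom.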




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

