yihua commented on code in PR #12866:
URL: https://github.com/apache/hudi/pull/12866#discussion_r1978085487


##########
hudi-hadoop-common/src/main/java/org/apache/hudi/common/bootstrap/index/hfile/HFileBootstrapIndexWriter.java:
##########
@@ -52,29 +51,29 @@
 import java.util.Map;
 import java.util.stream.Collectors;
 
-import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.INDEX_INFO_KEY;
+import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.INDEX_INFO_KEY_STRING;
 import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.fileIdIndexPath;
 import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.getFileGroupKey;
 import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.getPartitionKey;
 import static org.apache.hudi.common.bootstrap.index.hfile.HFileBootstrapIndex.partitionIndexPath;
 import static org.apache.hudi.common.util.StringUtils.getUTF8Bytes;
 
-public class HBaseHFileBootstrapIndexWriter extends BootstrapIndex.IndexWriter {
-  private static final Logger LOG = LoggerFactory.getLogger(HBaseHFileBootstrapIndexWriter.class);
+public class HFileBootstrapIndexWriter extends BootstrapIndex.IndexWriter {

Review Comment:
   Similarly, we should move the `HFileBootstrapIndexWriter` class back to 
`hudi-common` once it's dehadooped.



##########
hudi-hadoop-common/src/main/java/org/apache/hudi/common/bootstrap/index/hfile/HFileBootstrapIndexWriter.java:
##########
@@ -196,15 +194,15 @@ public void close() {
   @Override
   public void begin() {
     try {
-      HFileContext meta = new HFileContextBuilder().withCellComparator(new org.apache.hudi.common.bootstrap.index.HFileBootstrapIndex.HoodieKVComparator()).build();
-      this.indexByPartitionWriter = HFile.getWriterFactory(metaClient.getStorageConf().unwrapAs(Configuration.class),
-              new CacheConfig(metaClient.getStorageConf().unwrapAs(Configuration.class)))
-          .withPath((FileSystem) metaClient.getStorage().getFileSystem(), new Path(indexByPartitionPath.toUri()))
-          .withFileContext(meta).create();
-      this.indexByFileIdWriter = HFile.getWriterFactory(metaClient.getStorageConf().unwrapAs(Configuration.class),
-              new CacheConfig(metaClient.getStorageConf().unwrapAs(Configuration.class)))
-          .withPath((FileSystem) metaClient.getStorage().getFileSystem(), new Path(indexByFileIdPath.toUri()))
-          .withFileContext(meta).create();
+      HFileContext context = HFileContext.builder().build();
+      FsPermission fsPermission = FsPermission.getFileDefault();
+      FileSystem fs = (FileSystem) metaClient.getStorage().getFileSystem();
+      FSDataOutputStream outputStreamForPartitionWriter = HoodieHFileUtils.create(
+          fs, new Path(indexByPartitionPath.toUri()), fsPermission, true);

Review Comment:
   Let's directly use `HoodieStorage` (`metaClient.getStorage()`) and avoid using `FsPermission`. Use the `OutputStream` returned by the `HoodieStorage` API for writing the file.
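   To illustrate the direction, here is a minimal, self-contained sketch of the pattern being suggested: the writer asks a storage abstraction for an `OutputStream` keyed by path and writes through it, never touching `FileSystem`, `Path`, or `FsPermission` directly. `StubStorage` below is a hypothetical stand-in for illustration only, not Hudi's actual `HoodieStorage` class; it assumes the real API exposes a `create`-style method returning a plain `java.io.OutputStream`.

   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.IOException;
   import java.io.OutputStream;
   import java.nio.charset.StandardCharsets;
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical stand-in for a storage abstraction like HoodieStorage:
   // callers receive a plain OutputStream per path and never see
   // Hadoop-specific types such as FileSystem or FsPermission.
   class StubStorage {
     private final Map<String, ByteArrayOutputStream> files = new HashMap<>();

     OutputStream create(String path, boolean overwrite) {
       ByteArrayOutputStream out = new ByteArrayOutputStream();
       files.put(path, out);
       return out;
     }

     byte[] readAll(String path) {
       return files.get(path).toByteArray();
     }
   }

   public class BootstrapIndexWriteSketch {
     public static void main(String[] args) throws IOException {
       StubStorage storage = new StubStorage();
       // Writer side: obtain the stream from the abstraction and write bytes.
       try (OutputStream out = storage.create("/index/partition", true)) {
         out.write("index-bytes".getBytes(StandardCharsets.UTF_8));
       }
       System.out.println(
           new String(storage.readAll("/index/partition"), StandardCharsets.UTF_8));
       // prints "index-bytes"
     }
   }
   ```

   With this shape, the HFile writer only needs an `OutputStream`, so the Hadoop-specific permission and filesystem plumbing stays behind the storage abstraction.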



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
