virajjasani commented on a change in pull request #1913:
URL: https://github.com/apache/hbase/pull/1913#discussion_r441447292



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java
##########
@@ -118,6 +122,7 @@ public void append(RegionEntryBuffer buffer) throws IOException {
         openingWritersNum.decrementAndGet();
       } finally {
         writer.close();
+        LOG.trace("Closed {}, edits={}", writer.getPath(), familyCells.size());

Review comment:
      Same here — good to guard with `LOG.isTraceEnabled()`?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java
##########
@@ -99,12 +102,13 @@ public void append(RegionEntryBuffer buffer) throws IOException {
       }
     }
 
-    // The key point is create a new writer for each column family, write edits then close writer.
+    // Create a new hfile writer for each column family, write edits then close writer.
     String regionName = Bytes.toString(buffer.encodedRegionName);
     for (Map.Entry<String, CellSet> cellsEntry : familyCells.entrySet()) {
       String familyName = cellsEntry.getKey();
      StoreFileWriter writer = createRecoveredHFileWriter(buffer.tableName, regionName,
        familySeqIds.get(familyName), familyName, isMetaTable);
+      LOG.trace("Created {}", writer.getPath());

Review comment:
       Good to guard with `LOG.isTraceEnabled()`?
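
      To illustrate the suggestion: with parameterized logging the message string itself is only formatted when the level is enabled, but the *arguments* (here `writer.getPath()` and `familyCells.size()`) are still evaluated eagerly at the call site. A level guard skips that argument computation entirely. A minimal, self-contained sketch of the pattern using `java.util.logging` (HBase itself uses SLF4J, where the equivalent guard is `LOG.isTraceEnabled()`; the `expensivePath` helper is hypothetical, standing in for a costly argument):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceGuardDemo {
  private static final Logger LOG = Logger.getLogger(TraceGuardDemo.class.getName());

  // Counts how often the "expensive" argument is actually computed.
  static int expensiveCalls = 0;

  // Hypothetical stand-in for a costly log argument such as writer.getPath().
  static String expensivePath() {
    expensiveCalls++;
    return "/recovered/hfile";
  }

  public static void main(String[] args) {
    // Unguarded call: expensivePath() would run even though FINEST (trace)
    // is disabled, because arguments are evaluated before the level check.
    LOG.log(Level.FINEST, "Created {0}", expensivePath());

    // Guarded call: the argument is only computed when trace-level logging
    // is actually enabled, so expensivePath() is skipped here.
    if (LOG.isLoggable(Level.FINEST)) {
      LOG.log(Level.FINEST, "Created {0}", expensivePath());
    }

    // With the default INFO level, only the unguarded call paid the cost.
    System.out.println("expensive calls: " + expensiveCalls);  // prints "expensive calls: 1"
  }
}
```

      The guard matters most when an argument involves real work; for cheap getters the unguarded parameterized form is often considered acceptable.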




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
