ctubbsii commented on a change in pull request #2185:
URL: https://github.com/apache/accumulo/pull/2185#discussion_r666551581



##########
File path: server/manager/src/main/java/org/apache/accumulo/manager/Manager.java
##########
@@ -1120,6 +1122,9 @@ boolean canSuspendTablets() {
       if (null != upgradeMetadataFuture) {
         upgradeMetadataFuture.get();
       }
+      if (null != upgradeFilesFuture) {
+        upgradeFilesFuture.get();
+      }

Review comment:
       Well, we usually don't upgrade files at all, because the number of files 
tends to be so large that rewriting them up front doesn't scale. Instead, we 
support upgrading files over time in a distributed way, by being able to read 
multiple older versions while writing only the newest version. See the RFile 
code. I think we even still have some remaining code for reading MapFiles 
(which we used before RFile existed). In general, we don't want to upgrade 
files at all during this phase, and certainly not on a single machine, when 
there could be thousands and thousands of files across HDFS.
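
   A rough sketch of that idea (hypothetical class and version numbers, not the actual RFile API): the reader accepts several on-disk format versions, while the writer only ever emits the newest one, so files get upgraded incrementally as they are naturally rewritten (e.g. during compaction).

   ```java
   import java.io.DataInputStream;
   import java.io.IOException;

   // Hypothetical sketch only; real RFile version handling differs.
   final class VersionedFileReader {
     static final int OLDEST_SUPPORTED_VERSION = 6; // assumed values for illustration
     static final int CURRENT_VERSION = 8;          // writers only ever produce this

     static Object open(DataInputStream in) throws IOException {
       int version = in.readInt();
       if (version < OLDEST_SUPPORTED_VERSION || version > CURRENT_VERSION) {
         throw new IOException("Unsupported file format version " + version);
       }
       switch (version) {
         case 6: return readV6(in);   // older layout, still readable
         case 7: return readV7(in);   // older layout, still readable
         default: return readV8(in);  // current layout
       }
     }

     // Placeholders standing in for real per-version readers.
     private static Object readV6(DataInputStream in) { return new Object(); }
     private static Object readV7(DataInputStream in) { return new Object(); }
     private static Object readV8(DataInputStream in) { return new Object(); }
   }
   ```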
   
   In this case, I was concerned that the files being dropped may depend on 
metadata changes, since there could still be references to these sorted WAL 
files in the metadata. That would be one example. In general, though, we just 
don't upgrade files at this point at all.
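
   For illustration, a minimal sketch (hypothetical names, not the Manager's actual upgrade code) of chaining the futures so that file cleanup can never start before the metadata upgrade that may still reference those sorted WAL files:

   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   final class UpgradeOrderingSketch {
     public static void main(String[] args) throws Exception {
       ExecutorService pool = Executors.newFixedThreadPool(2);

       // Upgrade metadata first; only when that completes, drop files it no longer references.
       CompletableFuture<Void> upgradeMetadataFuture =
           CompletableFuture.runAsync(() -> System.out.println("upgrading metadata"), pool);
       CompletableFuture<Void> upgradeFilesFuture =
           upgradeMetadataFuture.thenRunAsync(
               () -> System.out.println("removing obsolete sorted WALs"), pool);

       // A caller (like canSuspendTablets above) can then safely wait on the chained future.
       upgradeFilesFuture.get();
       pool.shutdown();
     }
   }
   ```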



