ctubbsii commented on a change in pull request #1759:
URL: https://github.com/apache/accumulo/pull/1759#discussion_r516886203
##########
File path: server/manager/src/main/java/org/apache/accumulo/master/tableOps/create/CreateTable.java
##########
@@ -72,6 +77,9 @@ public long isReady(long tid, Master environment) throws Exception {
Utils.getIdLock().lock();
try {
String tName = tableInfo.getTableName();
+ if(tName.equals("ci")){
+ Thread.sleep(10000000);
+ }
Review comment:
This needs to be removed before merging. A user could legitimately have a table named `ci`, and this would stall its creation.
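If the delay exists to reproduce a race during testing, a safer pattern is an opt-in hook rather than a hardcoded table name. A rough sketch (the property name `accumulo.test.createtable.delay.ms` is hypothetical, purely for illustration; `isReady` already declares `throws Exception`, which covers the `InterruptedException` from `Thread.sleep`):
```java
// Hypothetical test-only hook; the property name is invented for illustration.
// Unset in production, so this branch is inert regardless of the table name.
long delayMs = Long.getLong("accumulo.test.createtable.delay.ms", 0L);
if (delayMs > 0) {
  Thread.sleep(delayMs);
}
```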
##########
File path: server/manager/src/main/java/org/apache/accumulo/master/tableOps/create/FinishCreateTable.java
##########
@@ -62,17 +67,23 @@ public long isReady(long tid, Master environment) {
env.getEventCoordinator().event("Created table %s ", tableInfo.getTableName());
if (tableInfo.getInitialSplitSize() > 0) {
- cleanupSplitFiles(env);
+ cleanupSplitFiles(tid, env);
}
return null;
}
- private void cleanupSplitFiles(Master env) throws IOException {
+ private void cleanupSplitFiles(long tid, Master env) throws IOException {
// it is sufficient to delete from the parent, because both files are in the same directory, and
// we want to delete the directory also
- Path p = tableInfo.getSplitPath().getParent();
- FileSystem fs = p.getFileSystem(env.getContext().getHadoopConf());
- fs.delete(p, true);
+ try {
+ Path p = tableInfo.getSplitPath().getParent();
+ FileSystem fs = p.getFileSystem(env.getContext().getHadoopConf());
+ fs.delete(p, true);
+ } catch (NullPointerException | IOException e) {
+ var spdir = Optional.ofNullable(tableInfo).map(TableInfo::getSplitDirsPath).orElse(null);
+ log.error("{} Failed to cleanup splits file after table was created, split dir {} ",
+     FateTxId.formatTid(tid), spdir, e);
+ }
Review comment:
I don't think it's possible for `tableInfo` to be null here, given how this class is constructed. And if the split size is `>0`, it's also not possible for the splits file locations to be null. Also, I think we should focus just on handling exceptions pertaining to the specific cleanup task we're performing. So, if these do happen to be null because of some serialization bug or whatever, that shouldn't be handled here; it should be handled by the framework when the exception propagates out of this method. (If the framework isn't capable of handling exceptions thrown by this method, the interface shouldn't declare `throws Exception` on it, and if that's a problem, it's a separate issue that should be fixed in a separate PR against the FATE framework itself.)
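For reference, the FATE step contract already anticipates this; here is a simplified sketch of the repo interface (abbreviated from `org.apache.accumulo.fate.Repo`, so the exact shape may differ slightly by version):
```java
// Simplified sketch; getDescription()/getReturn() and Serializable are omitted.
// Each step declares `throws Exception`, so the framework, not each step,
// owns the handling of unexpected failures like an NPE from a bad field.
public interface Repo<T> {
  long isReady(long tid, T environment) throws Exception;
  Repo<T> call(long tid, T environment) throws Exception;
  void undo(long tid, T environment) throws Exception;
}
```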
Also, the tx id is already in the path to the splits file, so it's possibly redundant to include it here (but I'm not opposed to keeping it).
Also, we probably want to log the parent path (the one we're trying to delete recursively), rather than just one of the two file names.
So, all told, I think something like this would be a little cleaner (with
similar changes made in the other places):
```suggestion
    Path p = null;
    try {
      p = tableInfo.getSplitPath().getParent();
      FileSystem fs = p.getFileSystem(env.getContext().getHadoopConf());
      fs.delete(p, true);
    } catch (IOException e) {
      log.error("Table was created, but failed to clean up temporary splits files at {}", p, e);
    }
```
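Note that `p` is declared before the `try` so the path is still in scope for the error message when the delete fails, and catching only `IOException` means a genuine programming error (like an unexpected null) still propagates to FATE instead of being logged and swallowed.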