cbaenziger commented on code in PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#discussion_r992538793
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalKeyStoreProvider.java:
##########
@@ -142,21 +142,27 @@ protected void initFileSystem(URI uri)
@Override
public void flush() throws IOException {
- super.flush();
- if (LOG.isDebugEnabled()) {
- LOG.debug("Resetting permissions to '" + permissions + "'");
- }
- if (!Shell.WINDOWS) {
- Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
- permissions);
- } else {
- // FsPermission expects a 10-character string because of the leading
- // directory indicator, i.e. "drwx------". The JDK toString method returns
- // a 9-character string, so prepend a leading character.
- FsPermission fsPermission = FsPermission.valueOf(
- "-" + PosixFilePermissions.toString(permissions));
- FileUtil.setPermission(file, fsPermission);
+ try {
+ super.getWriteLock().lock();
+ file.createNewFile();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Resetting permissions to '" + permissions + "'");
+ }
+ if (!Shell.WINDOWS) {
+ Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
+ permissions);
+ } else {
+ // FsPermission expects a 10-character string because of the leading
+ // directory indicator, i.e. "drwx------". The JDK toString method returns
+ // a 9-character string, so prepend a leading character.
+ FsPermission fsPermission = FsPermission.valueOf(
+ "-" + PosixFilePermissions.toString(permissions));
+ FileUtil.setPermission(file, fsPermission);
+ }
+ } finally {
+ super.getWriteLock().unlock();
}
+ super.flush();
Review Comment:
## My initial assumptions were:
If we fail to set permissions, I would expect an `IOError` or the like to
interrupt execution, so we would never reach the flush call.
I am a bit of a novice with Java ReadWriteLocks, and was thinking we could
deadlock if we do not release the write lock first, since flush in the [super
class](https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java#L285)
also attempts to acquire the writeLock.
Lastly, I was thinking nothing in Hadoop would be setting more permissive
permissions on this file.
## My updated understanding thanks to your question:
Reading the
[JavaDocs](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html)
for ReentrantReadWriteLock, though, I think the locks are held per-thread
rather than per-scope. And since the same thread can acquire the write lock
multiple times, I think the flush call can move inside the initial lock.
Further, from the JavaDocs and other uses in Hadoop, it looks like the write
lock should be acquired before entering the try block, with the unlock in the
finally block.
Code updated accordingly.
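The reentrancy behavior discussed above can be sketched in a standalone
snippet. The class and method names below are hypothetical stand-ins for the
provider classes, not the actual Hadoop code; the snippet only demonstrates
that a thread re-acquiring the same write lock does not deadlock, and the
acquire-before-try / unlock-in-finally idiom:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrantWriteLockDemo {
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Models the subclass flush(): acquires the write lock, then calls a
    // "super" flush that acquires the same lock again on the same thread.
    static int flush() {
        lock.writeLock().lock();   // acquired before the try block, per the common idiom
        try {
            return superFlush();   // re-entrant acquire; would deadlock if the lock were not reentrant
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Stands in for the superclass flush(), which also takes the write lock.
    static int superFlush() {
        lock.writeLock().lock();
        try {
            // Same thread now holds the write lock twice.
            return lock.getWriteHoldCount();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println("write hold count inside super flush: " + flush()); // 2
    }
}
```

Because the hold count is per-thread, both unlock() calls must run before
another thread can take the lock, which is why each lock() is paired with a
finally-based unlock().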
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]