xinglin commented on a change in pull request #3205:
URL: https://github.com/apache/hadoop/pull/3205#discussion_r673673927
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
##########
@@ -393,11 +403,12 @@ static HdfsFileStatus startFile(
fsn.checkFsObjectLimit();
INodeFile newNode = null;
INodesInPath parent =
- FSDirMkdirOp.createAncestorDirectories(fsd, iip, permissions);
+ FSDirMkdirOp.createMissingDirs(fsd, iip.getParentINodesInPath(),
+            permissions);
if (parent != null) {
iip = addFile(fsd, parent, iip.getLastLocalName(), permissions,
replication, blockSize, holder, clientMachine, shouldReplicate,
- ecPolicyName, storagePolicy);
+ ecPolicyName, storagePolicy, parent.getPath().equalsIgnoreCase(
+ iip.getPath()));
Review comment:
I am not sure I understand how
`parent.getPath().equalsIgnoreCase(iip.getPath())` translates into
`isLockAcquired`. First, the expression compares two paths, while the
parameter says whether the lock has been acquired; these are not the same
concept. Second, presumably `parent` points to the parent directory and `iip`
points to the current file, so shouldn't
`parent.getPath().equalsIgnoreCase(iip.getPath())` always return false?
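To make the second point concrete, here is a toy sketch using plain strings in place of the actual `INodesInPath` paths (the path values are made up for illustration): a parent directory's path is a strict prefix of the file's path, so the comparison looks like it can never be true.

```java
// Toy illustration with made-up paths; String stands in for the
// values returned by parent.getPath() and iip.getPath().
public class PathCompareSketch {
  public static void main(String[] args) {
    String iipPath = "/user/alice/file.txt"; // path of the file being created
    String parentPath = "/user/alice";       // path of its parent directory
    // The condition from the diff, applied to plain strings:
    boolean isLockAcquired = parentPath.equalsIgnoreCase(iipPath);
    System.out.println(isLockAcquired);      // prints "false"
  }
}
```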
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
##########
@@ -541,13 +552,51 @@ private static INodesInPath addFile(
FSDirectory fsd, INodesInPath existing, byte[] localName,
PermissionStatus permissions, short replication, long preferredBlockSize,
String clientName, String clientMachine, boolean shouldReplicate,
- String ecPolicyName, String storagePolicy) throws IOException {
+ String ecPolicyName, String storagePolicy, boolean isLockAcquired)
+ throws IOException {
Preconditions.checkNotNull(existing);
long modTime = now();
INodesInPath newiip;
fsd.writeLock();
try {
+ INodeFile newNode = createINodeFile(fsd, existing, localName,
+ permissions, replication, preferredBlockSize, clientName,
+ clientMachine, shouldReplicate, ecPolicyName, storagePolicy,
+        modTime);
+
+ if(!isLockAcquired){
+ // parent lock is not acquired earlier, so get the lock first
+ INode[] missing = new INode[]{newNode};
+ existing = existing.getExistingINodes();
+ if(missing.length == 0){
Review comment:
`missing` is initialized at line 569, and I don't see any code that
modifies it afterwards. How can its length become 0 here?
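For what it's worth, here is a minimal stand-alone sketch (with `Object` standing in for `INode`) of why that branch looks unreachable: an array initialized with one element has length 1, and nothing reassigns the variable before the check.

```java
// Object stands in for INode; this mirrors the shape of the quoted code.
public class MissingLengthSketch {
  public static void main(String[] args) {
    Object newNode = new Object();            // stand-in for the new INodeFile
    Object[] missing = new Object[]{newNode}; // same initialization as the diff
    // `missing` is never reassigned, so its length is always 1 here:
    System.out.println(missing.length);       // prints "1"
    System.out.println(missing.length == 0);  // prints "false"
  }
}
```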
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
##########
@@ -541,13 +552,51 @@ private static INodesInPath addFile(
FSDirectory fsd, INodesInPath existing, byte[] localName,
PermissionStatus permissions, short replication, long preferredBlockSize,
String clientName, String clientMachine, boolean shouldReplicate,
- String ecPolicyName, String storagePolicy) throws IOException {
+ String ecPolicyName, String storagePolicy, boolean isLockAcquired)
+ throws IOException {
Preconditions.checkNotNull(existing);
long modTime = now();
INodesInPath newiip;
fsd.writeLock();
try {
+ INodeFile newNode = createINodeFile(fsd, existing, localName,
+ permissions, replication, preferredBlockSize, clientName,
+ clientMachine, shouldReplicate, ecPolicyName, storagePolicy,
+        modTime);
+
+ if(!isLockAcquired){
+ // parent lock is not acquired earlier, so get the lock first
+ INode[] missing = new INode[]{newNode};
+ existing = existing.getExistingINodes();
+ if(missing.length == 0){
+ return existing;
+ }
+ // switch the locks
+ fsd.getINodeMap().latchWriteLock(existing, missing);
+ }
+
+ newiip = fsd.addINode(existing, newNode, permissions.getPermission());
+ } finally {
+ fsd.writeUnlock();
Review comment:
Do we need to release the write locks for all partitions here, i.e. the
ones acquired by latchWriteLock() at line 575? If the old code released the
global lock at this point, it seems we should release the partition locks as
well.
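To illustrate the pairing this comment is asking about, here is a generic, self-contained sketch using `ReentrantLock` as a stand-in for both the global `FSDirectory` write lock and a latched partition lock (the HDFS types are omitted, and I'm not sure what the release counterpart of `latchWriteLock()` is, so this only shows the shape):

```java
import java.util.concurrent.locks.ReentrantLock;

// Generic illustration: whatever is acquired inside the try block
// should be released in the finally block, in reverse order.
public class LockPairingSketch {
  public static void main(String[] args) {
    ReentrantLock globalLock = new ReentrantLock();    // stand-in for fsd.writeLock()
    ReentrantLock partitionLock = new ReentrantLock(); // stand-in for a latched partition lock

    globalLock.lock();
    boolean partitionLocked = false;
    try {
      partitionLock.lock();   // analogue of latchWriteLock(existing, missing)
      partitionLocked = true;
      // ... mutate shared state here ...
    } finally {
      if (partitionLocked) {
        partitionLock.unlock(); // release the partition lock(s) first
      }
      globalLock.unlock();      // then the global lock
    }
    System.out.println(partitionLock.isLocked()); // prints "false"
    System.out.println(globalLock.isLocked());    // prints "false"
  }
}
```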
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
##########
@@ -687,6 +722,14 @@ static boolean completeFile(FSNamesystem fsn, FSPermissionChecker pc,
}
checkBlock(fsn, last);
INodesInPath iip = fsn.dir.resolvePath(pc, src, fileId);
+
+ assert (iip.getLastINode() instanceof INodeFile);
+ INode[] missing = new INode[] { iip.getLastINode() };
+ INodesInPath existing = iip.getParentINodesInPath();
+ // switch the locks
+ FSDirectory fsd = fsn.getFSDirectory();
+ fsd.getINodeMap().latchWriteLock(existing, missing);
+
Review comment:
Do we want to add this section of code here? completeFile() seems to be
part of the file-write path rather than file creation. Secondly, if we are
switching the locks here, why don't I see the old global lock being taken in
this method?
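One small side note on the quoted hunk, separate from the locking question: `assert (iip.getLastINode() instanceof INodeFile)` only runs when the JVM is started with `-ea`, so a production NameNode running with default flags would skip it. A stand-alone sketch of the difference (with `Object`/`String` as stand-in types, for illustration only):

```java
// Stand-in types only: String plays the role of a node of the wrong class.
public class AssertVsCheckSketch {
  public static void main(String[] args) {
    // Assertion status depends on JVM flags: false by default, true with -ea.
    boolean assertionsEnabled = AssertVsCheckSketch.class.desiredAssertionStatus();
    System.out.println("assertions enabled: " + assertionsEnabled);

    Object lastINode = "not-an-INodeFile"; // imagine getLastINode() returned this
    // An explicit check always executes, regardless of JVM flags:
    if (!(lastINode instanceof Integer)) {
      System.out.println("explicit check caught the wrong type");
    }
  }
}
```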
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
##########
@@ -541,13 +552,51 @@ private static INodesInPath addFile(
FSDirectory fsd, INodesInPath existing, byte[] localName,
PermissionStatus permissions, short replication, long preferredBlockSize,
String clientName, String clientMachine, boolean shouldReplicate,
- String ecPolicyName, String storagePolicy) throws IOException {
+ String ecPolicyName, String storagePolicy, boolean isLockAcquired)
+ throws IOException {
Preconditions.checkNotNull(existing);
long modTime = now();
INodesInPath newiip;
fsd.writeLock();
try {
+ INodeFile newNode = createINodeFile(fsd, existing, localName,
+ permissions, replication, preferredBlockSize, clientName,
+ clientMachine, shouldReplicate, ecPolicyName, storagePolicy,
modTime);
+
+ if(!isLockAcquired){
+ // parent lock is not acquired earlier, so get the lock first
+ INode[] missing = new INode[]{newNode};
+ existing = existing.getExistingINodes();
Review comment:
Do we need to call `getExistingINodes()` here? Shouldn't `existing` be
properly set when we call createMissingDirs() in startFile()?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]