sokui commented on PR #7700:
URL: https://github.com/apache/ozone/pull/7700#issuecomment-2599044200
> > My consideration is that `org.apache.hadoop.ozone.om.request.util.OMMultipartUploadUtils#getMultipartOpenKey` is used in multiple places, including `S3ExpiredMultipartUploadsAbortRequest` and `S3MultipartUploadAbortRequest`. By updating it in one place, both call sites benefit (neither works currently). Secondly, if my current implementation of `getMultipartKeyFSO` is more reliable, there is no reason to limit that benefit to `S3MultipartUploadAbortRequest`; all the other call sites should use it as well.
>
> My worry is that there might be some places where the expectation for `OMMultipartUploadUtils#getMultipartKeyFSO` is to access the open key/file table, or some places where the `multipartInfoTable` entry does not exist yet, which could result in an NPE that crashes the OM (we should handle the possible NPE). However, I'm OK with this as long as there are no test regressions.
>
> > For the tests, I will take a look at the failure. For the new test you suggested, do you know if similar testing already exists that I can reference? I am not very familiar with the Ozone code base, so if there is no similar code, could you please show me a code snippet I can start with? Really appreciate it!
>
> You can start with the `TestOzoneClientMultipartUploadWithFSO` integration test.
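To make the NPE concern quoted above concrete, here is a minimal, self-contained sketch of the null-guard the reviewer is asking for. This is plain Java with a `HashMap` standing in for the real `multipartInfoTable`, and `MultipartLookup` is a hypothetical name, not an Ozone class:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: a HashMap stands in for the OM's multipartInfoTable,
// mapping a multipart key to its parent object id.
public class MultipartLookup {

  private final Map<String, Long> multipartInfoTable = new HashMap<>();

  void put(String multipartKey, long parentId) {
    multipartInfoTable.put(multipartKey, parentId);
  }

  /**
   * Returns the parent id for the upload, throwing a descriptive
   * exception instead of letting a null dereference crash the caller.
   */
  long getParentId(String multipartKey) {
    Long parentId = multipartInfoTable.get(multipartKey);
    if (parentId == null) {
      // Guard: the entry may not exist yet (e.g. abort racing with initiate).
      throw new IllegalStateException(
          "No multipart info found for key: " + multipartKey);
    }
    return parentId;
  }
}
```

The point is only that a missing table entry should surface as a handled error, not an unchecked NPE inside the OM.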
I started writing a test with the following code, but it seems I cannot delete the directory.
```java
String parentDir = "a/b";
keyName = parentDir + UUID.randomUUID();
OzoneManager ozoneManager = cluster.getOzoneManager();
String buckKey = ozoneManager.getMetadataManager()
    .getBucketKey(volume.getName(), bucket.getName());
OmBucketInfo buckInfo =
    ozoneManager.getMetadataManager().getBucketTable().get(buckKey);
BucketLayout bucketLayout = buckInfo.getBucketLayout();
String uploadID = initiateMultipartUploadWithAsserts(bucket, keyName,
    RATIS, ONE);
bucket.deleteDirectory(parentDir, false);
```
It gave me the following error:
```
KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Unable to get file status: volume: 8f75be75-539f-4fb1-8cc0-8c1123f1710f bucket: 2ce72254-3315-4662-95ae-cd09132ef932 key: a/b
    at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:763)
    at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.deleteKey(OzoneManagerProtocolClientSideTranslatorPB.java:962)
    at org.apache.hadoop.ozone.client.rpc.RpcClient.deleteKey(RpcClient.java:1631)
    at org.apache.hadoop.ozone.client.OzoneBucket.deleteDirectory(OzoneBucket.java:689)
    at org.apache.hadoop.ozone.client.rpc.TestOzoneClientMultipartUploadWithFSO.testAbortUploadSuccessWithMissingParentDirectories(TestOzoneClientMultipartUploadWithFSO.java:648)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
```
I checked the `bucket.deleteDirectory()` method. It looks like this:
```java
public void deleteDirectory(String key, boolean recursive)
    throws IOException {
  proxy.deleteKey(volumeName, name, key, recursive);
}
```
I wonder: is this deleting a key or a directory? And if it is a directory, why did I get the above exception?
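One thing worth checking in the test snippet, independent of the `deleteDirectory` semantics: `parentDir` has no trailing slash, so the concatenation produces a key like `a/b<uuid>`, whose parent directory is `a`, not `a/b`. A tiny, self-contained illustration of that string pitfall (plain Java, no Ozone dependencies; `KeyPathCheck` is a hypothetical name):

```java
import java.util.UUID;

public class KeyPathCheck {
  public static void main(String[] args) {
    String parentDir = "a/b";  // no trailing slash, as in the test above
    String keyName = parentDir + UUID.randomUUID();
    // The key's actual parent path is everything before the last '/'.
    // A UUID contains no '/', so the last '/' is the one inside "a/b".
    String actualParent = keyName.substring(0, keyName.lastIndexOf('/'));
    System.out.println(actualParent);  // prints "a", not "a/b"
  }
}
```

If the intent was for the key to live under `a/b`, the directory `a/b` would never be created for this key name, which could explain a `KEY_NOT_FOUND` when deleting it.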
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]