swagle commented on a change in pull request #50: HDDS-2131. Optimize replication type and creation time calculation in S3 MPU list call. URL: https://github.com/apache/hadoop-ozone/pull/50#discussion_r336331809
########## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##########

@@ -1323,29 +1323,36 @@ public OmMultipartUploadList listMultipartUploads(String volumeName,
     List<OmMultipartUpload> collect = multipartUploadKeys.stream()
         .map(OmMultipartUpload::from)
-        .map(upload -> {
+        .peek(upload -> {
           String dbKey = metadataManager
               .getOzoneKey(upload.getVolumeName(),
                   upload.getBucketName(),
                   upload.getKeyName());
           try {
-            Table<String, OmKeyInfo> openKeyTable =
-                metadataManager.getOpenKeyTable();
+            Table<String, OmMultipartKeyInfo> keyInfoTable =
+                metadataManager.getMultipartInfoTable();
 
-            OmKeyInfo omKeyInfo =
-                openKeyTable.get(upload.getDbKey());
+            OmMultipartKeyInfo multipartKeyInfo =
+                keyInfoTable.get(upload.getDbKey());
 
             upload.setCreationTime(
-                Instant.ofEpochMilli(omKeyInfo.getCreationTime()));
-
-            upload.setReplicationType(omKeyInfo.getType());
-            upload.setReplicationFactor(omKeyInfo.getFactor());
+                Instant.ofEpochMilli(multipartKeyInfo.getCreationTime()));
+
+            TreeMap<Integer, PartKeyInfo> partKeyInfoMap =
+                multipartKeyInfo.getPartKeyInfoMap();
+            if (!partKeyInfoMap.isEmpty()) {
+              PartKeyInfo partKeyInfo =

Review comment:
Hi @bharatviswa504, not sure I understand: if the map is not empty, we get the type and factor from the first part. isEmpty() => size == 0

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services
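For illustration, here is a minimal standalone sketch of the guard the reviewer describes: when the part map is non-empty, read the replication settings from the first part; otherwise fall back to a default. This uses plain Java collections only; the `Part` class and `replicationTypeOf` helper are hypothetical stand-ins for Ozone's `PartKeyInfo` and the logic inside `listMultipartUploads`, not the actual Ozone API.

```java
import java.util.TreeMap;

public class FirstPartSketch {

    // Hypothetical stand-in for Ozone's PartKeyInfo replication fields.
    static final class Part {
        final String replicationType;
        final int replicationFactor;
        Part(String replicationType, int replicationFactor) {
            this.replicationType = replicationType;
            this.replicationFactor = replicationFactor;
        }
    }

    // Returns the replication type of the first (lowest part number) part,
    // or a fallback when no part has been committed yet.
    // Note: isEmpty() is equivalent to size() == 0, as the reviewer points out.
    static String replicationTypeOf(TreeMap<Integer, Part> parts, String fallback) {
        if (!parts.isEmpty()) {
            // TreeMap orders entries by key, so firstEntry() is the first part.
            return parts.firstEntry().getValue().replicationType;
        }
        return fallback;
    }

    public static void main(String[] args) {
        TreeMap<Integer, Part> parts = new TreeMap<>();
        System.out.println(replicationTypeOf(parts, "RATIS"));  // prints "RATIS"

        parts.put(2, new Part("RATIS", 3));
        parts.put(1, new Part("STAND_ALONE", 1));
        System.out.println(replicationTypeOf(parts, "RATIS"));  // prints "STAND_ALONE"
    }
}
```

The design point mirrors the patch: all parts of one multipart upload share replication settings, so consulting only the first entry of the sorted part map avoids a lookup in a second table.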