bharatviswa504 commented on a change in pull request #1104: URL: https://github.com/apache/hadoop-ozone/pull/1104#discussion_r447318712
########## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java ##########

@@ -136,54 +137,49 @@ public void createBucket(OmBucketInfo bucketInfo) throws IOException {
       throw new OMException("Bucket already exist",
           OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
     }
+    BucketEncryptionKeyInfo bek = bucketInfo.getEncryptionKeyInfo();
-    BucketEncryptionKeyInfo.Builder bekb = null;
-    if (bek != null) {
-      if (kmsProvider == null) {
-        throw new OMException("Invalid KMS provider, check configuration " +
-            CommonConfigurationKeys.HADOOP_SECURITY_KEY_PROVIDER_PATH,
-            OMException.ResultCodes.INVALID_KMS_PROVIDER);
-      }
-      if (bek.getKeyName() == null) {
-        throw new OMException("Bucket encryption key needed.", OMException
-            .ResultCodes.BUCKET_ENCRYPTION_KEY_NOT_FOUND);
-      }
-      // Talk to KMS to retrieve the bucket encryption key info.
-      KeyProvider.Metadata metadata = getKMSProvider().getMetadata(
-          bek.getKeyName());
-      if (metadata == null) {
-        throw new OMException("Bucket encryption key " + bek.getKeyName()
-            + " doesn't exist.",
-            OMException.ResultCodes.BUCKET_ENCRYPTION_KEY_NOT_FOUND);
-      }
-      // If the provider supports pool for EDEKs, this will fill in the pool
-      kmsProvider.warmUpEncryptedKeys(bek.getKeyName());
-      bekb = new BucketEncryptionKeyInfo.Builder()
-          .setKeyName(bek.getKeyName())
-          .setVersion(CryptoProtocolVersion.ENCRYPTION_ZONES)
-          .setSuite(CipherSuite.convert(metadata.getCipher()));
-    }
-    List<OzoneAcl> acls = new ArrayList<>();
-    acls.addAll(bucketInfo.getAcls());
-    volumeArgs.getAclMap().getDefaultAclList().forEach(
-        a -> acls.add(OzoneAcl.fromProtobufWithAccessType(a)));
-
-    OmBucketInfo.Builder omBucketInfoBuilder = OmBucketInfo.newBuilder()
-        .setVolumeName(bucketInfo.getVolumeName())
-        .setBucketName(bucketInfo.getBucketName())
-        .setAcls(acls)
-        .setStorageType(bucketInfo.getStorageType())
-        .setIsVersionEnabled(bucketInfo.getIsVersionEnabled())
-        .setCreationTime(Time.now())
-        .addAllMetadata(bucketInfo.getMetadata());
+
+    boolean hasSourceVolume = bucketInfo.getSourceVolume() != null;
+    boolean hasSourceBucket = bucketInfo.getSourceBucket() != null;
+
+    if (hasSourceBucket != hasSourceVolume) {
+      throw new OMException("Both source volume and source bucket are " +
+          "required for bucket links",
+          OMException.ResultCodes.INVALID_REQUEST);
+    }
+
+    if (bek != null && hasSourceBucket) {
+      throw new OMException("Encryption cannot be set for bucket links",
+          OMException.ResultCodes.INVALID_REQUEST);
+    }
+
+    BucketEncryptionKeyInfo.Builder bekb =
+        createBucketEncryptionKeyInfoBuilder(bek);
+
+    OmBucketInfo.Builder omBucketInfoBuilder = bucketInfo.toBuilder()
+        .setCreationTime(Time.now());
+
+    List<OzoneManagerProtocolProtos.OzoneAclInfo> defaultAclList =

Review comment:
   Reading further, I understood that if we create a link /vol1/buck1 -> /vol2/buck2 (source), we create /vol1/buck1 in the DB with sourceVolume set to vol2 and sourceBucket set to buck2.

   Now, when someone calls lookupKey through a link whose source cannot be resolved, the actual lookupKey request fails with BUCKET_NOT_FOUND.

   Do you think we need to make sure the source volume/source bucket exists during link creation, to avoid such scenarios? My reasoning is that the current behavior looks strange: the user thinks they created a link bucket with some source volume/source bucket, since the create passed without any issue, but a later key creation then reports that the bucket does not exist.

   Following the ln -s <<source>> <<dest>> semantics, where dangling links are allowed, looks confusing in our scenario.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
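The validation in the patch (the XOR check on source volume/bucket) plus the extra source-existence check the comment asks about could be sketched as below. This is a minimal, self-contained sketch: BucketLinkRequest, MetadataStore, and the exception type are hypothetical stand-ins for illustration, not the actual Ozone OM classes or API.

```java
// Hypothetical sketch of bucket-link validation. All type names here are
// simplified stand-ins, not real Ozone classes.
public class BucketLinkValidator {

  /** Stand-in for the link-creation request (hypothetical). */
  static class BucketLinkRequest {
    final String sourceVolume;
    final String sourceBucket;

    BucketLinkRequest(String sourceVolume, String sourceBucket) {
      this.sourceVolume = sourceVolume;
      this.sourceBucket = sourceBucket;
    }
  }

  /** Stand-in for a metadata lookup (hypothetical). */
  interface MetadataStore {
    boolean bucketExists(String volume, String bucket);
  }

  /**
   * Mirrors the XOR check from the patch: either both source volume and
   * source bucket are set, or neither is. When requireSourceExists is true,
   * it additionally rejects links whose source bucket does not exist yet,
   * which is the extra check the review comment proposes.
   */
  static void validate(BucketLinkRequest req, MetadataStore store,
                       boolean requireSourceExists) {
    boolean hasSourceVolume = req.sourceVolume != null;
    boolean hasSourceBucket = req.sourceBucket != null;

    if (hasSourceBucket != hasSourceVolume) {
      throw new IllegalArgumentException(
          "Both source volume and source bucket are required for bucket links");
    }

    if (requireSourceExists && hasSourceBucket
        && !store.bucketExists(req.sourceVolume, req.sourceBucket)) {
      throw new IllegalArgumentException(
          "Source bucket " + req.sourceVolume + "/" + req.sourceBucket
          + " does not exist");
    }
  }
}
```

Note the trade-off: checking existence at link creation gives the user an early, clear error, but it does not fully prevent dangling links, since the source bucket can still be deleted after the link is created, so lookupKey would still need to handle BUCKET_NOT_FOUND.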