fapifta commented on code in PR #4417:
URL: https://github.com/apache/ozone/pull/4417#discussion_r1144891874


##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/ShortLivedTokenIdentifier.java:
##########
@@ -33,18 +34,22 @@ public abstract class ShortLivedTokenIdentifier extends TokenIdentifier {
 
   private String ownerId;
   private Instant expiry;
-  private String certSerialId;
+  private UUID secretKeyId;
 
   public abstract String getService();
 
   protected ShortLivedTokenIdentifier() {
   }
 
-  protected ShortLivedTokenIdentifier(String ownerId, Instant expiry,
-      String certSerialId) {
+  protected ShortLivedTokenIdentifier(String ownerId, Instant expiry) {
     this.ownerId = ownerId;
     this.expiry = expiry;
-    this.certSerialId = certSerialId;
+  }
+
+  protected ShortLivedTokenIdentifier(String ownerId, Instant expiry,

Review Comment:
   Currently this constructor is unused; may I ask what the plan is for the
secretKeyId, and whether it needs to be a field at all?
   At the moment a fair amount of construction magic seems to be happening
around the secretKeyId, and I would love to simplify and reduce the number of
constructors here, unless that is already planned as a next step, hence I
thought I would ask first.



##########
hadoop-hdds/interface-client/src/main/proto/hdds.proto:
##########
@@ -357,9 +357,10 @@ message BlockTokenSecretProto {
     required string ownerId = 1;
     required string blockId = 2;
     required uint64 expiryDate = 3;
-    required string omCertSerialId = 4;
+    optional string omCertSerialId = 4 [deprecated=true];
     repeated AccessModeProto modes = 5;
     required uint64 maxLength = 6;
+    required UUID secretKeyId = 7;

Review Comment:
   As the comment above the message definition says: "When adding further
fields, make sure they are optional as they would otherwise not be backwards
compatible."
   
   I am wondering whether that comment applies here, though. If the client
does not try to interpret the protocol message but just relays it, we might
be fine: in that case, the fact that we do not support rolling upgrades and
require the server sides to be on the same version would prevent any problems
from adding a new required field here. If this assumption about the client
side does not hold, older clients will fail when they are handed the token,
because an earlier required field may be absent while a new field is present.
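
   To make the concern concrete, here is a minimal sketch of the failure
mode, assuming proto2 semantics and the generated
HddsProtos.BlockTokenSecretProto class (the helper name is made up for
illustration):

```java
import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;

// An old client compiled against a schema that still marks omCertSerialId
// as `required` rejects a token serialized without it, even though the
// client never reads that field itself.
static boolean oldClientCanParse(byte[] tokenBytes) {
  try {
    // On the old schema this throws
    // "Message missing required fields: omCertSerialId"; the unknown new
    // secretKeyId field alone would simply be kept as an unknown field.
    HddsProtos.BlockTokenSecretProto.parseFrom(tokenBytes);
    return true;
  } catch (InvalidProtocolBufferException e) {
    return false;
  }
}
```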



##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneSecurityUtil.java:
##########
@@ -73,6 +78,20 @@ public static boolean isHttpSecurityEnabled(ConfigurationSource conf) {
         OZONE_HTTP_SECURITY_ENABLED_DEFAULT);
   }
 
+  public static boolean isBlockTokenEnabled(ConfigurationSource conf) {

Review Comment:
   These methods seem to be unnecessary.
   There is a class called SecurityConfig that handles the configuration
values these methods fetch from the configuration object. The only use of the
first two methods is in the third one, and in this patch the third one is
used in only one place: the HDDSDataNodeService start method, where the code
starts the verifier client if tokens are enabled. Since we already have a
SecurityConfig instance created in HDDSDataNodeService#start, isTokenEnable
(which I would call areTokensEnabled) can be implemented as a convenience on
SecurityConfig if we want, or HDDSDataNodeService can simply check the two
values via the SecurityConfig#isXXXXXTokenEnabled() methods.
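
   A sketch of what I mean, assuming SecurityConfig exposes the two token
flags as the rest of the patch implies (the import path and helper placement
are illustrative):

```java
import org.apache.hadoop.hdds.conf.ConfigurationSource;
import org.apache.hadoop.hdds.security.x509.SecurityConfig;

// Convenience check built on SecurityConfig instead of new static helpers
// in OzoneSecurityUtil.
static boolean areTokensEnabled(ConfigurationSource conf) {
  SecurityConfig secConf = new SecurityConfig(conf);
  return secConf.isBlockTokenEnabled() || secConf.isContainerTokenEnabled();
}
```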



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ec/reconstruction/TokenHelper.java:
##########
@@ -61,7 +61,7 @@ class TokenHelper {
 
    // checking certClient != null instead of securityConfig.isSecurityEnabled()
     // to allow integration test without full kerberos etc. setup
-    boolean securityEnabled = certClient != null;
+    boolean securityEnabled = secretKeyClient != null;
 
     if (securityEnabled && (blockTokenEnabled || containerTokenEnabled)) {
       user = UserGroupInformation.getCurrentUser().getShortUserName();

Review Comment:
   We do not need to check the expiry time and grace period here; that was
only important while we used a certificate to sign the token. From now on we
use a shared secret, which is generated independently of the certs.



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ec/reconstruction/TokenHelper.java:
##########
@@ -82,17 +82,15 @@ class TokenHelper {
       }
 
       if (blockTokenEnabled) {

Review Comment:
   We should check here whether security is enabled. Formerly, if security
was not enabled, there were no certificates to sign the tokens with, and in
that case having tokens does not seem meaningful either. Have you checked
whether we do anything meaningful with tokens when security is disabled, or
whether it works at all?



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ec/reconstruction/TokenHelper.java:
##########


Review Comment:
   Considering that this class is only used from the
ECReconstructionCoordinator, and that lives in the Datanode, not in SCM or
OM, this means that DNs are generating tokens as part of EC reconstruction.
   
   Anyway, I would consider removing this class and incorporating what we
need into the reconstruction coordinator directly. There, instead of
TokenHelper, we can create the token secret managers based on the
configuration, and then just call the generate-token methods and encode the
tokens.
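
   A rough sketch of what that could look like; the constructor and method
signatures here are illustrative assumptions, not the actual API:

```java
// Inside ECReconstructionCoordinator: build the managers from config...
if (securityConfig.isBlockTokenEnabled()) {
  blockTokenMgr =
      new OzoneBlockTokenSecretManager(tokenLifetime, secretKeyClient);
}
if (securityConfig.isContainerTokenEnabled()) {
  containerTokenMgr =
      new ContainerTokenSecretManager(tokenLifetime, secretKeyClient);
}

// ...and generate/encode tokens right where they are needed.
Token<OzoneBlockTokenIdentifier> blockToken =
    blockTokenMgr.generateToken(user, blockId, modes, maxLength);
```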



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java:
##########
@@ -601,9 +604,10 @@ private OzoneManager(OzoneConfiguration conf, StartupOption startupOption)
       certClient = new OMCertificateClient(secConfig, omStorage,
           scmInfo == null ? null : scmInfo.getScmId(), this::saveNewCertId,
           this::terminateOM);
+      secretKeyClient = DefaultSecretKeySignerClient.create(conf);

Review Comment:
   Here the code creates the secretKeyClient only when security is enabled,
but later on, in the createBlockTokenSecretManager() method, we pass the
secretKeyClient to the blockTokenManager via its constructor even if it is
null, and we create the blockTokenManager whenever block tokens are enabled,
even without security being enabled.
   
   Is there any value in having a block token secret manager when security is
not enabled and this client is effectively null?
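
   A small sketch of the guard I have in mind, reusing the existing secConfig
instance:

```java
// Only create the block token manager when both security and block tokens
// are enabled, so it never ends up holding a null secretKeyClient.
if (secConfig.isSecurityEnabled() && secConfig.isBlockTokenEnabled()) {
  blockTokenMgr = createBlockTokenSecretManager();
}
```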



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java:
##########
@@ -986,38 +990,29 @@ private OzoneDelegationTokenSecretManager createDelegationTokenSecretManager(
         .build();
   }
 
-  private OzoneBlockTokenSecretManager createBlockTokenSecretManager(
-      OzoneConfiguration conf) {
+  private OzoneBlockTokenSecretManager createBlockTokenSecretManager() {
 
-    long expiryTime = conf.getTimeDuration(
+    long expiryTime = configuration.getTimeDuration(

Review Comment:
   These checks are related to certificates. If we switch to a shared secret
and symmetric encryption and skip certificates, the certificate lifetime and
expiry date are not relevant here anymore, and I believe we should not check
for them here.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java:
##########
@@ -1038,15 +1033,19 @@ public void startSecretManager() {
       LOG.error("Unable to read key pair for OM.", e);
       throw new UncheckedIOException(e);
     }
+
     if (secConfig.isBlockTokenEnabled() && blockTokenMgr != null) {
+      LOG.info("Starting secret key client.");
       try {
-        LOG.info("Starting OM block token secret manager");
-        blockTokenMgr.start(certClient);
+        secretKeyClient.start(configuration);
       } catch (IOException e) {
-        // Unable to start secret manager.
-        LOG.error("Error starting block token secret manager.", e);
+        LOG.error("Unable to initialize secret key.", e);
         throw new UncheckedIOException(e);
       }
+      // A backdoor for integration-tests to inject a custom secretKeyClient.
+      // This allows testing token in integration test without fully setting
+      // up a working secure cluster.
+      blockTokenMgr.setSecretKeyClient(secretKeyClient);

Review Comment:
   Why don't we do this in the OzoneManager's setSecretKeyClient method
instead of here in start? The current approach seems more error-prone to me.
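
   Something like this sketch, assuming OzoneManager keeps a
setSecretKeyClient(...) test hook (the client type is taken from the diff
above):

```java
// Propagate the injected client at injection time, so start() does not
// need to know about the test backdoor at all.
public void setSecretKeyClient(SecretKeySignerClient secretKeyClient) {
  this.secretKeyClient = secretKeyClient;
  if (blockTokenMgr != null) {
    blockTokenMgr.setSecretKeyClient(secretKeyClient);
  }
}
```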



##########
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/token/TestOzoneBlockTokenIdentifier.java:
##########
@@ -17,157 +17,74 @@
  */
 package org.apache.hadoop.hdds.security.token;
 
-import java.io.ByteArrayInputStream;
-import java.io.DataInputStream;
-import java.io.File;
-import java.io.IOException;
-import java.security.GeneralSecurityException;
-import java.security.InvalidKeyException;
-import java.security.KeyPair;
-import java.security.NoSuchAlgorithmException;
-import java.security.NoSuchProviderException;
-import java.security.PrivateKey;
-import java.security.Signature;
-import java.security.SignatureException;
-import java.security.cert.Certificate;
-import java.security.cert.CertificateEncodingException;
-import java.security.cert.X509Certificate;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.List;
-import java.util.Map;
-import javax.crypto.KeyGenerator;
-import javax.crypto.Mac;
-import javax.crypto.SecretKey;
-
-import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.commons.lang3.RandomUtils;
-import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.security.symmetric.ManagedSecretKey;
+import org.apache.hadoop.hdds.security.symmetric.SecretKeyTestUtil;
 import org.apache.hadoop.io.Text;
-import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
 import org.apache.hadoop.security.token.Token;
-import org.apache.ozone.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
-import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.Assertions;
-import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.util.EnumSet;
+
+import static java.time.Duration.ofDays;
+import static java.time.Instant.now;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertTrue;
 
 /**
  * Test class for {@link OzoneBlockTokenIdentifier}.
  */
 public class TestOzoneBlockTokenIdentifier {
+  private long expiryTime;
+  private ManagedSecretKey secretKey;
 
-  private static final Logger LOG = LoggerFactory
-      .getLogger(TestOzoneBlockTokenIdentifier.class);
-  private static final String BASEDIR = GenericTestUtils
-      .getTempPath(TestOzoneBlockTokenIdentifier.class.getSimpleName());
-  private static final String KEYSTORES_DIR =
-      new File(BASEDIR).getAbsolutePath();
-  private static long expiryTime;
-  private static KeyPair keyPair;
-  private static X509Certificate cert;
-
-  @BeforeAll
-  public static void setUp() throws Exception {
-    File base = new File(BASEDIR);
-    FileUtil.fullyDelete(base);
-    base.mkdirs();
+  @BeforeEach
+  public void setUp() throws Exception {
     expiryTime = Time.monotonicNow() + 60 * 60 * 24;
 
-    // Create Ozone Master key pair.
-    keyPair = KeyStoreTestUtil.generateKeyPair("RSA");
-    // Create Ozone Master certificate (SCM CA issued cert) and key store.
-    cert = KeyStoreTestUtil
-        .generateCertificate("CN=OzoneMaster", keyPair, 30, "SHA256withRSA");
-  }
-
-  @AfterEach
-  public void cleanUp() throws Exception {
-    // KeyStoreTestUtil.cleanupSSLConfig(KEYSTORES_DIR, sslConfsDir);
+    secretKey = SecretKeyTestUtil.generateHmac(now(), ofDays(1));
   }
 
   @Test
-  public void testSignToken() throws GeneralSecurityException, IOException {
-    String keystore = new File(KEYSTORES_DIR, "keystore.jks")
-        .getAbsolutePath();
-    String truststore = new File(KEYSTORES_DIR, "truststore.jks")
-        .getAbsolutePath();
-    String trustPassword = "trustPass";
-    String keyStorePassword = "keyStorePass";
-    String keyPassword = "keyPass";
-
-
-    KeyStoreTestUtil.createKeyStore(keystore, keyStorePassword, keyPassword,
-        "OzoneMaster", keyPair.getPrivate(), cert);
-
-    // Create trust store and put the certificate in the trust store
-    Map<String, X509Certificate> certs = Collections.singletonMap("server",
-        cert);
-    KeyStoreTestUtil.createTrustStore(truststore, trustPassword, certs);
-
-    // Sign the OzoneMaster Token with Ozone Master private key
-    PrivateKey privateKey = keyPair.getPrivate();
+  public void testSignToken() {
     OzoneBlockTokenIdentifier tokenId = new OzoneBlockTokenIdentifier(
         "testUser", "84940",
         EnumSet.allOf(HddsProtos.BlockTokenSecretProto.AccessModeProto.class),
-        expiryTime, cert.getSerialNumber().toString(), 128L);
-    byte[] signedToken = signTokenAsymmetric(tokenId, privateKey);
-
-    // Verify a valid signed OzoneMaster Token with Ozone Master
-    // public key(certificate)
-    boolean isValidToken = verifyTokenAsymmetric(tokenId, signedToken, cert);
-    LOG.info("{} is {}", tokenId, isValidToken ? "valid." : "invalid.");
+        expiryTime, secretKey.getId(), 128L);
+    byte[] signedToken = secretKey.sign(tokenId);
 
-    // Verify an invalid signed OzoneMaster Token with Ozone Master
-    // public key(certificate)
-    tokenId = new OzoneBlockTokenIdentifier("", "",
-        EnumSet.allOf(HddsProtos.BlockTokenSecretProto.AccessModeProto.class),
-        expiryTime, cert.getSerialNumber().toString(), 128L);
-    LOG.info("Unsigned token {} is {}", tokenId,
-        verifyTokenAsymmetric(tokenId, RandomUtils.nextBytes(128), cert));
+    // Verify a valid signed OzoneMaster Token with Ozone Master.
+    assertTrue(secretKey.isValidSignature(tokenId.getBytes(), signedToken));
 
+    // // Verify an invalid signed OzoneMaster Token with Ozone Master.

Review Comment:
   nit: double comment mark



##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java:
##########
@@ -17,13 +17,6 @@
  */
 package org.apache.hadoop.ozone;
 
-import java.io.IOException;

Review Comment:
   Can we please skip the re-ordering of imports?



##########
hadoop-hdds/framework/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker:
##########
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+mock-maker-inline

Review Comment:
   I know this file is skipped by checkstyle, but we recently made an effort
to have a newline at the end of every file, and enforced it via checkstyle
where we could... so please add one here as well ;)
   
   On the other hand, can you explain why we need this? From the test
changes, and from quickly checking the documentation, it is really not
obvious whether and why this needs to be specified.



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ec/reconstruction/TokenHelper.java:
##########
@@ -61,7 +61,7 @@ class TokenHelper {
 
    // checking certClient != null instead of securityConfig.isSecurityEnabled()
     // to allow integration test without full kerberos etc. setup
-    boolean securityEnabled = certClient != null;
+    boolean securityEnabled = secretKeyClient != null;
 
     if (securityEnabled && (blockTokenEnabled || containerTokenEnabled)) {

Review Comment:
   I am wondering whether we need this if statement here at all. Instead of
storing the UGI's short user name in a variable, why can't we just fetch it
in generateKey when we already know we need it?
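
   Something like this sketch (method placement is illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

// Resolve the short user name where the token is generated, instead of
// caching it in a field behind the if statement.
private static String currentUser() throws IOException {
  return UserGroupInformation.getCurrentUser().getShortUserName();
}
```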



##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/OzoneBlockTokenIdentifier.java:
##########
@@ -51,6 +53,20 @@ public class OzoneBlockTokenIdentifier extends ShortLivedTokenIdentifier {
   private EnumSet<AccessModeProto> modes;
   private long maxLength;
 
+  public OzoneBlockTokenIdentifier(

Review Comment:
   I was wondering whether we really need both of these constructors. I
understand that we had two in the older version as well, but the only
difference is how we get the blockId parameter: as a BlockID or as a String.
   The String representation is used in the protocol, and I believe we should
not use it elsewhere, but apparently we are doing so right now. I think we
might address this along with the series of changes we are making to go from
asymmetric to symmetric keys. What do you think?
   I don't want to address this right now, but should we go ahead and create
a task for it?



##########
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java:
##########
@@ -182,7 +182,7 @@ public void testTmpDirCleanup() throws Exception {
         serverAddress, 1000)) {
       DatanodeDetails datanodeDetails = randomDatanodeDetails();
       OzoneContainer ozoneContainer = new OzoneContainer(
-          datanodeDetails, conf, getContext(datanodeDetails), null);
+          datanodeDetails, conf, getContext(datanodeDetails), null, null);

Review Comment:
   nit: if you keep the constructor with the shortened parameter list, use it
here as well as in the other tests ;)



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java:
##########
@@ -245,6 +247,12 @@ public DatanodeStateMachine(DatanodeDetails datanodeDetails,
     queueMetrics = DatanodeQueueMetrics.create(this);
   }
 
+  @VisibleForTesting
+  public DatanodeStateMachine(DatanodeDetails datanodeDetails,

Review Comment:
   I am not really a fan of adding a constructor just for testing...
   I don't really have hard reasons; it just seems unnatural to me to have a
short constructor in order to type less in tests, instead of declaring there
explicitly that we deliberately skip passing things that might be used during
the lifetime of the object.
   But please note, this is something I can live with; I just wanted to
express that I would not do it.



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java:
##########
@@ -264,6 +267,16 @@ public OzoneContainer(
         new AtomicReference<>(InitializingStatus.UNINITIALIZED);
   }
 
+  /**
+   * Shorthand constructor used for testing in non-secure context.
+   */
+  @VisibleForTesting
+  public OzoneContainer(

Review Comment:
   I can say the same as I did for HDDSStateMachine's similar constructor.



##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/symmetric/DefaultSecretKeyVerifierClient.java:
##########
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.security.symmetric;
+
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.CacheLoader;
+import com.google.common.cache.LoadingCache;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
+import org.jetbrains.annotations.NotNull;

Review Comment:
   We generally do not use JetBrains annotations.



##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/symmetric/DefaultSecretKeyVerifierClient.java:
##########
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.security.symmetric;
+
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.CacheLoader;
+import com.google.common.cache.LoadingCache;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
+import org.jetbrains.annotations.NotNull;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.UUID;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.hdds.security.symmetric.SecretKeyConfig.parseExpiryDuration;
+import static org.apache.hadoop.hdds.security.symmetric.SecretKeyConfig.parseRotateDuration;
+import static org.apache.hadoop.hdds.utils.HddsServerUtil.getScmSecurityClient;
+
+/**
+ * Default implementation of {@link SecretKeyVerifierClient} that fetches
+ * SecretKeys remotely via {@link SCMSecurityProtocol}.
+ */
+public class DefaultSecretKeyVerifierClient implements SecretKeyVerifierClient {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DefaultSecretKeyVerifierClient.class);
+
+  private final LoadingCache<UUID, ManagedSecretKey> cache;
+
+  DefaultSecretKeyVerifierClient(SCMSecurityProtocol scmSecurityProtocol,
+                                 ConfigurationSource conf) {
+    Duration expiryDuration = parseExpiryDuration(conf);
+    Duration rotateDuration = parseRotateDuration(conf);
+    long cacheSize = expiryDuration.toMillis() / rotateDuration.toMillis() + 1;
+
+    CacheLoader<UUID, ManagedSecretKey> loader =
+        new CacheLoader<UUID, ManagedSecretKey>() {
+          @NotNull
+          @Override
+          public ManagedSecretKey load(@NotNull UUID id) throws Exception {
+            ManagedSecretKey secretKey = scmSecurityProtocol.getSecretKey(id);
+            LOG.info("Secret key fetched from SCM: {}", secretKey);
+            return secretKey;
+          }
+        };
+
+    LOG.info("Initializing secret key cache with size {}, TTL {}",
+        cacheSize, expiryDuration);
+    cache = CacheBuilder.newBuilder()
+        .maximumSize(cacheSize)
+        .expireAfterWrite(expiryDuration.toMillis(), TimeUnit.MILLISECONDS)
+        .recordStats()
+        .build(loader);
+  }
+
+  @Override
+  public ManagedSecretKey getSecretKey(UUID id) throws SCMSecurityException {
+    try {
+      return cache.get(id);
+    } catch (ExecutionException e) {
+      return handleCacheException(e, id);
+    }
+  }
+
+  private <T> T handleCacheException(ExecutionException e, UUID id)

Review Comment:
   I think we should just declare this method void, as we never return
anything but throw an exception in all branches.
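
   A sketch of the void variant; note that the call site then needs its own
throw after the call, since javac cannot see that the helper always throws
(the SCMSecurityException constructor used here is an assumption):

```java
private void handleCacheException(ExecutionException e, UUID id)
    throws SCMSecurityException {
  Throwable cause = e.getCause();
  if (cause instanceof SCMSecurityException) {
    throw (SCMSecurityException) cause;
  }
  throw new SCMSecurityException(
      "Failed to fetch secret key " + id + " from SCM", e);
}
```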



##########
hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto:
##########
@@ -529,7 +529,8 @@ message ContainerTokenSecretProto {
     required string ownerId = 1;
     required ContainerID containerId = 2;
     required uint64 expiryDate = 3;
-    required string certSerialId = 4;
+    optional string certSerialId = 4 [deprecated=true];
+    required UUID secretKeyId = 5;

Review Comment:
   We probably cannot add this as required; see the details in my comment on
BlockTokenSecretProto.



##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java:
##########
@@ -955,8 +953,8 @@ private ContainerTokenSecretManager createContainerTokenSecretManager(
       scmCertificateClient = new SCMCertificateClient(securityConfig,
           certSerialNumber, SCM_ROOT_CA_COMPONENT_NAME);
     }
-    return new ContainerTokenSecretManager(securityConfig,
-        expiryTime);
+    return new ContainerTokenSecretManager(expiryTime,

Review Comment:
   Before this code piece, there are the regular checks for certificate
lifetime and existence. Do we still need those?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

