GlenGeng commented on a change in pull request #1981:
URL: https://github.com/apache/ozone/pull/1981#discussion_r588006734
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogStateManagerImpl.java
##########
@@ -194,8 +192,7 @@ public void increaseRetryCountOfTransactionInDB(
// then set the retry value to -1, stop retrying, admins can
// analyze those blocks and purge them manually by SCMCli.
DeletedBlocksTransaction.Builder builder =
block.toBuilder().setCount(-1);
- deletedTable.putWithBatch(
- transactionBuffer.getCurrentBatchOperation(), txID, builder.build());
+ transactionBuffer.addToBuffer(deletedTable, txID, builder.build());
Review comment:
There will be a memory leak in `deletingTxIDs` and `skippingRetryTxIDs`,
since `onFlush` won't be called without ratis.
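A minimal, self-contained sketch of the failure mode (the class and method names below are hypothetical stand-ins, not the real Ozone types):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in: txIDs are staged into an in-memory set on every
// delete, but only onFlush() drains it. If Ratis is disabled, onFlush()
// never fires and the set grows without bound.
class TxIDTracker {
  private final Set<Long> deletingTxIDs = new HashSet<>();
  private final boolean ratisEnabled;

  TxIDTracker(boolean ratisEnabled) {
    this.ratisEnabled = ratisEnabled;
  }

  void stageDelete(long txID) {
    // Guard: only track txIDs when a later onFlush() will clear them.
    if (ratisEnabled) {
      deletingTxIDs.add(txID);
    }
  }

  // Called from the Ratis flush path only.
  void onFlush() {
    deletingTxIDs.clear();
  }

  int stagedCount() {
    return deletingTxIDs.size();
  }
}
```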
##########
File path:
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/metadata/DBTransactionBuffer.java
##########
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.metadata;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+/**
+ * DB transaction that abstracts the updates to the underlying datastore.
+ */
+public interface DBTransactionBuffer<KEY, VALUE> extends Closeable {
+
+ void addToBuffer(Table<KEY, VALUE> table, KEY key, VALUE value) throws
+ IOException;
+
+ void removeFromBuffer(Table<KEY, VALUE> table, KEY key) throws
+ IOException;
+
+ void close() throws IOException;
Review comment:
No need for this `close` method; it is exactly the same as the one inherited from `Closeable`.
##########
File path:
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java
##########
@@ -85,23 +86,24 @@
private ContainerManagerV2 containerManager;
private StorageContainerManager scm;
private List<DatanodeDetails> dnList;
- private DBTransactionBuffer dbTransactionBuffer;
+ private SCMHADBTransactionBuffer SCMHADBTransactionBuffer;
@Before
public void setup() throws Exception {
testDir = GenericTestUtils.getTestDir(
TestDeletedBlockLog.class.getSimpleName());
conf = new OzoneConfiguration();
+ conf.setBoolean(ScmConfigKeys.OZONE_SCM_HA_ENABLE_KEY, true);
conf.setInt(OZONE_SCM_BLOCK_DELETION_MAX_RETRY, 20);
conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, testDir.getAbsolutePath());
scm = TestUtils.getScm(conf);
containerManager = Mockito.mock(ContainerManagerV2.class);
- dbTransactionBuffer =
- new MockDBTransactionBuffer(scm.getScmMetadataStore().getStore());
+ SCMHADBTransactionBuffer =
+ new MockSCMHADBTransactionBuffer(scm.getScmMetadataStore().getStore());
Review comment:
Agree. Can `MockSCMHADBTransactionBuffer` share the same
`addToBuffer`/`removeFromBuffer` logic as `SCMDBTransactionBufferImpl`, and
just let its `flush` throw an exception?
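One way to sketch that idea (simplified, with hypothetical classes standing in for `SCMDBTransactionBufferImpl` and the mock; a plain `Map` replaces the real batch machinery so the snippet stays self-contained):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for SCMDBTransactionBufferImpl: the real class batches
// table mutations for RocksDB; a Map is used here only to keep the sketch
// runnable.
class BufferImpl {
  protected final Map<String, String> batch = new HashMap<>();

  void addToBuffer(String key, String value) {
    batch.put(key, value);
  }

  void removeFromBuffer(String key) {
    batch.remove(key);
  }

  void flush() {
    // the real implementation would commit the batch to the store
    batch.clear();
  }
}

// The mock inherits addToBuffer/removeFromBuffer unchanged and only makes
// flush fail, so any test that accidentally flushes is caught immediately.
class MockBuffer extends BufferImpl {
  @Override
  void flush() {
    throw new UnsupportedOperationException("flush must not be called in tests");
  }
}
```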
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ha/SCMHAManagerImpl.java
##########
@@ -85,25 +94,27 @@ public SCMHAManagerImpl(final ConfigurationSource conf,
*/
@Override
public void start() throws IOException {
- ratisServer.start();
- if (ratisServer.getDivision().getGroup().getPeers().isEmpty()) {
- // this is a bootstrapped node
- // It will first try to add itself to existing ring
- boolean success = HAUtils.addSCM(OzoneConfiguration.of(conf),
- new AddSCMRequest.Builder().setClusterId(scm.getClusterId())
- .setScmId(scm.getScmId()).setRatisAddr(
- scm.getSCMHANodeDetails().getLocalNodeDetails()
- // TODO : Should we use IP instead of hostname??
- .getRatisHostPortStr()).build(), scm.getSCMNodeId());
- if (!success) {
- throw new IOException("Adding SCM to existing HA group failed");
+ if (ratisServer != null) {
Review comment:
Avoid the overly deep indentation.
How about an early return:
```
if (ratisServer == null) {
  return;
}
// original code
```
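The guard-clause pattern in general looks like this (a self-contained sketch; `Service` and the `Runnable` field are hypothetical simplifications of `SCMHAManagerImpl` and its ratis server, not the real types):

```java
// Hypothetical simplification of start(): when the optional dependency is
// absent (non-HA mode), bail out at the top instead of nesting the whole
// method body inside an if-block.
class Service {
  private final Runnable ratisServer; // null when SCM HA is disabled
  boolean started = false;

  Service(Runnable ratisServer) {
    this.ratisServer = ratisServer;
  }

  void start() {
    if (ratisServer == null) {
      return; // nothing to do without ratis
    }
    // original body, now at the top indentation level
    ratisServer.run();
    started = true;
  }
}
```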
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ha/SCMHAInvocationHandler.java
##########
@@ -50,16 +50,20 @@ public SCMHAInvocationHandler(final RequestType requestType,
this.requestType = requestType;
this.localHandler = localHandler;
this.ratisHandler = ratisHandler;
- ratisHandler.registerStateMachineHandler(requestType, localHandler);
+ if (ratisHandler != null) {
+ ratisHandler.registerStateMachineHandler(requestType, localHandler);
+ }
}
@Override
public Object invoke(final Object proxy, final Method method,
final Object[] args) throws Throwable {
try {
long startTime = Time.monotonicNow();
- final Object result = method.isAnnotationPresent(Replicate.class) ?
- invokeRatis(method, args) : invokeLocal(method, args);
+ final Object result =
Review comment:
Please add a `Preconditions.checkNotNull` for `ratisHandler` in `invokeRatis`
too. Thanks!
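A sketch of the suggested guard. `java.util.Objects.requireNonNull` is used here so the snippet is self-contained; the review asks for Guava's `Preconditions.checkNotNull`, which behaves the same way. The class below is a hypothetical stand-in for `SCMHAInvocationHandler`:

```java
import java.util.Objects;

// Hypothetical stand-in for SCMHAInvocationHandler: ratisHandler may be null
// when ratis is disabled, so fail fast with a clear message if a @Replicate
// method is invoked anyway.
class HAInvocationHandlerSketch {
  private final Object ratisHandler;

  HAInvocationHandlerSketch(Object ratisHandler) {
    this.ratisHandler = ratisHandler;
  }

  Object invokeRatis() {
    Objects.requireNonNull(ratisHandler,
        "ratisHandler is null: @Replicate methods need a ratis server");
    return "replicated";
  }
}
```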
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogStateManagerImpl.java
##########
@@ -170,8 +169,7 @@ public void removeTransactionsFromDB(ArrayList<Long> txIDs)
throws IOException {
deletingTxIDs.addAll(txIDs);
for (Long txID : txIDs) {
- deletedTable.deleteWithBatch(
- transactionBuffer.getCurrentBatchOperation(), txID);
+ transactionBuffer.removeFromBuffer(deletedTable, txID);
Review comment:
`deletingTxIDs.addAll(txIDs);` should not be called if ratis is not enabled; otherwise the staged txIDs are never cleared.
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
##########
@@ -68,12 +68,7 @@
import java.io.IOException;
import java.net.InetSocketAddress;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.TreeSet;
+import java.util.*;
Review comment:
Please expand the wildcard back to the explicit imports the diff removed (`ArrayList`, `Collections`, `List`, `Map`, `Set`, `TreeSet`); wildcard imports are usually flagged by checkstyle.
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ha/MockSCMHADBTransactionBuffer.java
##########
@@ -20,22 +20,22 @@
import org.apache.hadoop.hdds.utils.db.BatchOperation;
import org.apache.hadoop.hdds.utils.db.DBStore;
import org.apache.hadoop.hdds.utils.db.RDBBatchOperation;
+import org.apache.hadoop.hdds.utils.db.Table;
import org.apache.ratis.statemachine.SnapshotInfo;
import java.io.IOException;
-public class MockDBTransactionBuffer implements DBTransactionBuffer {
+public class MockSCMHADBTransactionBuffer implements SCMHADBTransactionBuffer {
private DBStore dbStore;
private BatchOperation currentBatchOperation;
- public MockDBTransactionBuffer() {
+ public MockSCMHADBTransactionBuffer() {
}
- public MockDBTransactionBuffer(DBStore store) {
+ public MockSCMHADBTransactionBuffer(DBStore store) {
this.dbStore = store;
}
- @Override
public BatchOperation getCurrentBatchOperation() {
Review comment:
Change `getCurrentBatchOperation` to private.
##########
File path:
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
##########
@@ -476,7 +476,7 @@ public static void quasiCloseContainer(ContainerManagerV2
containerManager,
public static StorageContainerManager getScmSimple(OzoneConfiguration conf)
throws IOException, AuthenticationException {
SCMConfigurator configurator = new SCMConfigurator();
- conf.setBoolean(ScmConfigKeys.OZONE_SCM_HA_ENABLE_KEY, true);
+ // conf.setBoolean(ScmConfigKeys.OZONE_SCM_HA_ENABLE_KEY, true);
Review comment:
I guess this is the flag that enables Ratis SCM in tests? Better to add a
comment, since a commented-out line like this is easy to overlook.
##########
File path:
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/metadata/DBTransactionBuffer.java
##########
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.metadata;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+/**
+ * DB transaction that abstracts the updates to the underlying datastore.
+ */
+public interface DBTransactionBuffer<KEY, VALUE> extends Closeable {
+
+ void addToBuffer(Table<KEY, VALUE> table, KEY key, VALUE value) throws
+ IOException;
+
+ void removeFromBuffer(Table<KEY, VALUE> table, KEY key) throws
Review comment:
A generic method may be better than a generic class here. We can fix it
later if it turns out to be too complicated.
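A sketch of the generic-method shape. The `Table` stub below stands in for `org.apache.hadoop.hdds.utils.db.Table`, and `DirectBuffer` is an illustrative write-through implementation, not the real buffer:

```java
import java.io.Closeable;
import java.io.IOException;

// Stub standing in for org.apache.hadoop.hdds.utils.db.Table.
interface Table<K, V> {
  void put(K key, V value) throws IOException;
  void delete(K key) throws IOException;
}

// Generic methods instead of a generic interface: a single buffer instance
// can then serve tables with different key/value types. Note close() is not
// re-declared; it is inherited from Closeable.
interface DBTransactionBuffer extends Closeable {
  <K, V> void addToBuffer(Table<K, V> table, K key, V value) throws IOException;

  <K, V> void removeFromBuffer(Table<K, V> table, K key) throws IOException;
}

// Trivial write-through implementation, for illustration only.
class DirectBuffer implements DBTransactionBuffer {
  public <K, V> void addToBuffer(Table<K, V> table, K key, V value)
      throws IOException {
    table.put(key, value);
  }

  public <K, V> void removeFromBuffer(Table<K, V> table, K key)
      throws IOException {
    table.delete(key);
  }

  public void close() {
  }
}
```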
##########
File path:
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
##########
@@ -454,10 +454,11 @@ private void initializeSystemManagers(OzoneConfiguration
conf,
if (configurator.getScmContext() != null) {
scmContext = configurator.getScmContext();
} else {
+ long term = SCMHAUtils.isSCMHAEnabled(conf) ? 0 :
SCMContext.INVALID_TERM;
Review comment:
Good!
Better to add a comment here:
```
// When term equals SCMContext.INVALID_TERM, isLeader() and
// getTermOfLeader() will always pass.
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]