errose28 commented on a change in pull request #1702:
URL: https://github.com/apache/ozone/pull/1702#discussion_r544471398
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java
##########
@@ -41,5 +46,13 @@ public DatanodeStoreSchemaTwoImpl(ConfigurationSource config,
throws IOException {
super(config, containerID, new DatanodeSchemaTwoDBDefinition(dbPath),
openReadOnly);
+ this.txnTable =
+ (TypedTable<Long, DeletedBlocksTransaction>) new
+ DatanodeSchemaTwoDBDefinition(
+ dbPath).getTxnBlocksColumnFamily().getTable(getStore());
+ }
+
+ public TypedTable<Long, DeletedBlocksTransaction> getTxnTable(){
+ return txnTable;
Review comment:
    All other table getters in `AbstractDatanodeStore` return the `Table` type.
I think we can do the same for this getter, making the cast in the constructor
unnecessary.
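For example, something along these lines (just a sketch; keeping the field as a
plain `Table` lets the assignment from the column family definition stand
without a cast):
```java
// Sketch: field and getter typed as Table, matching the other getters in
// AbstractDatanodeStore, so no TypedTable cast is needed in the constructor.
private final Table<Long, DeletedBlocksTransaction> txnTable;

// In the constructor:
//   this.txnTable = new DatanodeSchemaTwoDBDefinition(dbPath)
//       .getTxnBlocksColumnFamily().getTable(getStore());

public Table<Long, DeletedBlocksTransaction> getTxnTable() {
  return txnTable;
}
```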
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -50,14 +48,22 @@
import org.apache.hadoop.ozone.container.common.utils.ReferenceCountedDB;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
import org.apache.hadoop.ozone.container.keyvalue.helpers.BlockUtils;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStore;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaTwoImpl;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
import org.apache.hadoop.util.Time;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
import com.google.common.collect.Lists;
+
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL_DEFAULT;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V1;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V2;
+
Review comment:
       GitHub won't let me comment on unchanged lines, but while we are
touching this code we might as well remove the old TODO on line 75. I'm pretty
sure the block deleting service works for all storage layouts now.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
}
crr.addAll(succeedBlocks);
return crr;
- } finally {
- container.writeUnlock();
+ } catch (IOException exception) {
+ LOG.warn(
+ "Deletion operation was not successful for container: " + container
+ .getContainerData().getContainerID());
+ }
+ return crr;
+ }
+
+ public ContainerBackgroundTaskResult deleteViaSchema2(
+ ReferenceCountedDB meta, Container container, File dataDir,
+ long startTime) {
+ ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+ try {
+ Table<String, BlockData> blockDataTable =
+ meta.getStore().getBlockDataTable();
+ DatanodeStore ds = meta.getStore();
+ DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+ (DatanodeStoreSchemaTwoImpl) ds;
+ Table<Long, DeletedBlocksTransaction>
+ delTxTable = dnStoreTwoImpl.getTxnTable();
+ List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
+ int totalBlocks = 0;
+ try (TableIterator<Long,
+ ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
+ dnStoreTwoImpl.getTxnTable().iterator()) {
+ while (iter.hasNext() && (totalBlocks < blockLimitPerTask)) {
+ DeletedBlocksTransaction delTx = iter.next().getValue();
+ totalBlocks += delTx.getLocalIDList().size();
+ delBlocks.add(delTx);
+ }
+ }
+
+ if (delBlocks.isEmpty()) {
+ LOG.debug("No transaction found in container : {}",
+ containerData.getContainerID());
+ }
+
+ int deleteBlockCount = 0;
+ LOG.debug("Container : {}, To-Delete blocks : {}",
+ containerData.getContainerID(), delBlocks.size());
+ if (!dataDir.exists() || !dataDir.isDirectory()) {
+ LOG.error("Invalid container data dir {} : "
+            + "does not exist or not a directory", dataDir.getAbsolutePath());
+ return crr;
+ }
+
+ Handler handler = Objects.requireNonNull(ozoneContainer.getDispatcher()
+ .getHandler(container.getContainerType()));
+
+ for (DeletedBlocksTransaction entry: delBlocks) {
+ for (Long blkLong : entry.getLocalIDList()) {
+ String blk = blkLong.toString();
+ BlockData blkInfo = blockDataTable.get(blk);
Review comment:
       If it is not too difficult, could we add a test where a transaction is
submitted for deletion multiple times to test this case? If such a test already
exists and I missed it, please point me to where it is.
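Something like the following is what I have in mind, purely as a sketch;
`buildDeleteTxn` and `applyDeleteTxn` are hypothetical helpers standing in for
whatever the test harness exposes:
```java
// Hypothetical sketch: apply the same deletion transaction twice (as SCM
// might on a retry) and verify the pending-delete count reflects it only once.
DeletedBlocksTransaction delTx = buildDeleteTxn(txnID, containerID, localIDs);

applyDeleteTxn(handler, containerData, delTx);
applyDeleteTxn(handler, containerData, delTx);

Assert.assertEquals(localIDs.size(),
    containerData.getNumPendingDeletionBlocks());
```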
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
##########
@@ -283,7 +319,58 @@ private void deleteKeyValueContainerBlocks(
// update pending deletion blocks count and delete transaction ID in
// in-memory container status
containerData.updateDeleteTransactionId(delTX.getTxID());
+ containerData.incrPendingDeletionBlocks(newDeletionBlocks);
+ }
+ }
+ }
+ }
+
+ private void updateMetaData(
+ KeyValueContainerData containerData, DeletedBlocksTransaction delTX,
+ int newDeletionBlocks) throws IOException {
+ long containerId = delTX.getContainerID();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Processing Container : {}, DB path : {}", containerId,
+ containerData.getMetadataPath());
+ }
+
+ if (delTX.getTxID() < containerData.getDeleteTransactionId()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(String.format("Ignoring delete blocks for containerId: %d."
+ + " Outdated delete transactionId %d < %d", containerId,
+ delTX.getTxID(), containerData.getDeleteTransactionId()));
+ }
+ return;
+ }
+ try(ReferenceCountedDB containerDB =
+ BlockUtils.getDB(containerData, conf)) {
+
+ if (newDeletionBlocks > 0) {
+ // Finally commit the DB counters.
+ try(BatchOperation batchOperation =
+ containerDB.getStore().getBatchHandler().initBatchOperation()) {
+ Table< String, Long > metadataTable = containerDB.getStore()
+ .getMetadataTable();
+
+ // In memory is updated only when existing delete transactionID is
+ // greater.
+ if (delTX.getTxID() > containerData.getDeleteTransactionId()) {
+ // Update in DB pending delete key count and delete transaction ID.
+ metadataTable.putWithBatch(batchOperation,
+ OzoneConsts.DELETE_TRANSACTION_KEY, delTX.getTxID());
+ }
+
+ long pendingDeleteBlocks =
+ containerData.getNumPendingDeletionBlocks() + newDeletionBlocks;
+ metadataTable.putWithBatch(batchOperation,
+ OzoneConsts.PENDING_DELETE_BLOCK_COUNT, pendingDeleteBlocks);
+ containerDB.getStore().getBatchHandler()
+ .commitBatchOperation(batchOperation);
Review comment:
       I'm not super familiar with the SCM code, but will SCM use the same
transaction ID on retry if it sends a delete block command to the datanode and
doesn't get an ack? If so, this code will incorrectly update metadata in that
case. Once [Lokesh's
comment](https://github.com/apache/ozone/pull/1702/files#r544184639) is
addressed, and if the `<` there is changed to `<=`, then I believe the check on
line 358 can be removed and we should be OK to handle duplicate transaction IDs.
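For illustration, the early-exit check I am picturing would look roughly like
this (a sketch only, assuming it becomes the single guard before any metadata
update):
```java
// Sketch of the adjusted guard: treat an equal transaction ID as a duplicate
// (e.g. an SCM retry) and skip the update entirely, so pending-delete counts
// are not incremented twice.
if (delTX.getTxID() <= containerData.getDeleteTransactionId()) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Ignoring duplicate or outdated delete transaction {} for "
        + "container {}", delTX.getTxID(), containerData.getContainerID());
  }
  return;
}
```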
##########
File path:
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
##########
@@ -158,65 +190,167 @@ private void createToDeleteBlocks(ContainerSet
containerSet,
}
byte[] arr = randomAlphanumeric(1048576).getBytes(UTF_8);
ChunkBuffer buffer = ChunkBuffer.wrap(ByteBuffer.wrap(arr));
+ int txnID = 0;
for (int x = 0; x < numOfContainers; x++) {
long containerID = ContainerTestHelper.getTestContainerID();
KeyValueContainerData data =
new KeyValueContainerData(containerID, layout,
ContainerTestHelper.CONTAINER_MAX_SIZE,
UUID.randomUUID().toString(), datanodeUuid);
data.closeContainer();
+ data.setSchemaVersion(schemaVersion);
KeyValueContainer container = new KeyValueContainer(data, conf);
container.create(volumeSet,
new RoundRobinVolumeChoosingPolicy(), scmId);
containerSet.addContainer(container);
data = (KeyValueContainerData) containerSet.getContainer(
containerID).getContainerData();
- long chunkLength = 100;
- try(ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
- for (int j = 0; j < numOfBlocksPerContainer; j++) {
+ if (data.getSchemaVersion().equals(SCHEMA_V1)) {
+ try(ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
Review comment:
       For readability, can we refactor the schema 1 and schema 2 branches of
this if statement into their own methods, named according to the schema version
they correspond to, as is done in other parts of the code? That way there is a
consistent pattern of calling either a `*SchemaV1` or `*SchemaV2` method
wherever the two versions need different code paths.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
##########
@@ -129,7 +134,16 @@ public void handle(SCMCommand command, OzoneContainer
container,
cont.getContainerData();
cont.writeLock();
try {
- deleteKeyValueContainerBlocks(containerData, entry);
+ if (containerData.getSchemaVersion().equals(SCHEMA_V1)) {
+ deleteKeyValueContainerBlocks(containerData, entry);
+ } else if (containerData.getSchemaVersion().equals(SCHEMA_V2)) {
+ putTxns(containerData, entry, newDeletionBlocks,
Review comment:
       Can we rename `putTxns` and `deleteKeyValueContainerBlocks` to reflect
the schema version they correspond to? For example,
`markBlocksForDeletionSchemaV1` and `markBlocksForDeletionSchemaV2`.
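For example, the two entry points could look like this after the rename
(signatures beyond what the diff shows are assumptions on my part; the bodies
stay as they are):
```java
// Illustrative only: same methods, renamed to carry the schema version.
private void markBlocksForDeletionSchemaV1(
    KeyValueContainerData containerData, DeletedBlocksTransaction delTX)
    throws IOException {
  // body of the current deleteKeyValueContainerBlocks(...)
}

private void markBlocksForDeletionSchemaV2(
    KeyValueContainerData containerData, DeletedBlocksTransaction delTX,
    int newDeletionBlocks) throws IOException {
  // body of the current putTxns(...); any further parameters stay as they are
}
```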
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
}
crr.addAll(succeedBlocks);
return crr;
- } finally {
- container.writeUnlock();
+ } catch (IOException exception) {
+ LOG.warn(
+ "Deletion operation was not successful for container: " + container
+ .getContainerData().getContainerID());
+ }
+ return crr;
+ }
+
+ public ContainerBackgroundTaskResult deleteViaSchema2(
+ ReferenceCountedDB meta, Container container, File dataDir,
+ long startTime) {
+ ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+ try {
+ Table<String, BlockData> blockDataTable =
+ meta.getStore().getBlockDataTable();
+ DatanodeStore ds = meta.getStore();
+ DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+ (DatanodeStoreSchemaTwoImpl) ds;
+ Table<Long, DeletedBlocksTransaction>
+ delTxTable = dnStoreTwoImpl.getTxnTable();
+ List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
+ int totalBlocks = 0;
+ try (TableIterator<Long,
+ ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
+ dnStoreTwoImpl.getTxnTable().iterator()) {
+ while (iter.hasNext() && (totalBlocks < blockLimitPerTask)) {
+ DeletedBlocksTransaction delTx = iter.next().getValue();
+ totalBlocks += delTx.getLocalIDList().size();
+ delBlocks.add(delTx);
+ }
+ }
+
+ if (delBlocks.isEmpty()) {
+ LOG.debug("No transaction found in container : {}",
+ containerData.getContainerID());
+ }
+
+ int deleteBlockCount = 0;
+ LOG.debug("Container : {}, To-Delete blocks : {}",
+ containerData.getContainerID(), delBlocks.size());
+ if (!dataDir.exists() || !dataDir.isDirectory()) {
+ LOG.error("Invalid container data dir {} : "
+            + "does not exist or not a directory", dataDir.getAbsolutePath());
+ return crr;
+ }
+
+ Handler handler = Objects.requireNonNull(ozoneContainer.getDispatcher()
+ .getHandler(container.getContainerType()));
+
+ for (DeletedBlocksTransaction entry: delBlocks) {
+ for (Long blkLong : entry.getLocalIDList()) {
+ String blk = blkLong.toString();
+ BlockData blkInfo = blockDataTable.get(blk);
Review comment:
       If a block has already been deleted and we try to delete it again like
in [Lokesh's
example](https://github.com/apache/ozone/pull/1702/files#r544211111), we should
do nothing. This implementation will result in a NullPointerException in
`handler#deleteBlock` when the block is not found in the block data table. Note
that the original version of the code handles this case when marking blocks for
deletion with a prefix, but since we are operating on transactions, not
prefixes, we have to do the check here when actually deleting the block.
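Concretely, a null check along these lines inside the deletion loop should
cover it (sketch only):
```java
BlockData blkInfo = blockDataTable.get(blk);
if (blkInfo == null) {
  // Block was already removed by an earlier run of this transaction
  // (e.g. a retried delete); skip it instead of passing null to the handler.
  if (LOG.isDebugEnabled()) {
    LOG.debug("Block {} not found in container {}, skipping delete.",
        blk, containerData.getContainerID());
  }
  continue;
}
```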
##########
File path:
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
##########
@@ -289,8 +291,7 @@ public void testBlockDeletionTransactions() throws
Exception {
return false;
}
}, 1000, 10000);
- Assert.assertTrue(helper.getAllBlocks(containerIDs).isEmpty());
-
+ Assert.assertTrue(helper.verifyBlocksWithTxnTable(containerBlocks));
Review comment:
       This will only work for schema version two. It should be possible to
change `SCHEMA_LATEST` to V1 and have this test still pass. Therefore, we need
something along the lines of `verifyWithSchemaV1` and `verifyWithSchemaV2`
methods.
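Roughly (assuming the test is parameterized by schema version; the V1 branch
reuses the existing `getAllBlocks` check and the V2 branch the new
`verifyBlocksWithTxnTable` check):
```java
// Illustrative dispatch on the schema version under test.
if (schemaVersion.equals(SCHEMA_V1)) {
  Assert.assertTrue(helper.getAllBlocks(containerIDs).isEmpty());
} else if (schemaVersion.equals(SCHEMA_V2)) {
  Assert.assertTrue(helper.verifyBlocksWithTxnTable(containerBlocks));
}
```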
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java
##########
@@ -41,5 +46,13 @@ public DatanodeStoreSchemaTwoImpl(ConfigurationSource config,
throws IOException {
super(config, containerID, new DatanodeSchemaTwoDBDefinition(dbPath),
openReadOnly);
+ this.txnTable =
+ (TypedTable<Long, DeletedBlocksTransaction>) new
Review comment:
       The getters in the parent class return `Table`s instead of
`TypedTable`s. I think we can do the same here; then this cast is not needed,
and the getter and private field can be changed accordingly.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
##########
@@ -88,4 +88,10 @@ protected DatanodeSchemaOneDBDefinition(String dbPath) {
getDeletedBlocksColumnFamily() {
return DELETED_BLOCKS;
}
+
+ @Override
+ public DBColumnFamilyDefinition[] getColumnFamilies() {
+ return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
+ getMetadataColumnFamily(), getDeletedBlocksColumnFamily() };
Review comment:
Do we need this override? It looks the same as the parent method.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
##########
@@ -88,4 +88,10 @@ protected DatanodeSchemaOneDBDefinition(String dbPath) {
getDeletedBlocksColumnFamily() {
return DELETED_BLOCKS;
}
+
+ @Override
+ public DBColumnFamilyDefinition[] getColumnFamilies() {
+ return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
+ getMetadataColumnFamily(), getDeletedBlocksColumnFamily() };
+ }
Review comment:
I don't think this override is necessary. It looks the same as the
parent class method.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
##########
@@ -283,7 +319,58 @@ private void deleteKeyValueContainerBlocks(
// update pending deletion blocks count and delete transaction ID in
// in-memory container status
containerData.updateDeleteTransactionId(delTX.getTxID());
+ containerData.incrPendingDeletionBlocks(newDeletionBlocks);
+ }
+ }
+ }
+ }
+
+ private void updateMetaData(
+ KeyValueContainerData containerData, DeletedBlocksTransaction delTX,
+ int newDeletionBlocks) throws IOException {
+ long containerId = delTX.getContainerID();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Processing Container : {}, DB path : {}", containerId,
+ containerData.getMetadataPath());
+ }
+
+ if (delTX.getTxID() < containerData.getDeleteTransactionId()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(String.format("Ignoring delete blocks for containerId: %d."
+ + " Outdated delete transactionId %d < %d", containerId,
+ delTX.getTxID(), containerData.getDeleteTransactionId()));
+ }
+ return;
+ }
+ try(ReferenceCountedDB containerDB =
+ BlockUtils.getDB(containerData, conf)) {
+
+ if (newDeletionBlocks > 0) {
+ // Finally commit the DB counters.
+ try(BatchOperation batchOperation =
+ containerDB.getStore().getBatchHandler().initBatchOperation()) {
+ Table< String, Long > metadataTable = containerDB.getStore()
+ .getMetadataTable();
+
+ // In memory is updated only when existing delete transactionID is
+ // greater.
+ if (delTX.getTxID() > containerData.getDeleteTransactionId()) {
+ // Update in DB pending delete key count and delete transaction ID.
+ metadataTable.putWithBatch(batchOperation,
+ OzoneConsts.DELETE_TRANSACTION_KEY, delTX.getTxID());
+ }
+
+ long pendingDeleteBlocks =
+ containerData.getNumPendingDeletionBlocks() + newDeletionBlocks;
+ metadataTable.putWithBatch(batchOperation,
+ OzoneConsts.PENDING_DELETE_BLOCK_COUNT, pendingDeleteBlocks);
Review comment:
       I am not very familiar with the SCM code, but will SCM use the same
transaction ID to retry a deletion if it does not get an ack from the datanode?
If so, this code will result in a duplicate metadata update if the DN did in
fact receive the first transaction. If IDs are reused for retries, then once
[Lokesh's comment](https://github.com/apache/ozone/pull/1702/files#r544184639)
is addressed, changing the `<` to `<=` in that check and removing the check on
line 358 should handle it.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
##########
@@ -283,7 +319,58 @@ private void deleteKeyValueContainerBlocks(
// update pending deletion blocks count and delete transaction ID in
// in-memory container status
containerData.updateDeleteTransactionId(delTX.getTxID());
+ containerData.incrPendingDeletionBlocks(newDeletionBlocks);
+ }
+ }
+ }
+ }
+
+ private void updateMetaData(
Review comment:
       And if we do this for both versions, we might as well batch the writes
of the transaction data and the metadata together as well. Even though the
original code does not do this, there is a chance of a crash between the
transaction update and the metadata update, causing them to become out of sync.
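Roughly what I have in mind; `txnTable` here stands for the schema-two
transaction table obtained from the same store, and the rest reuses the names
from this patch (a sketch, not a drop-in change):
```java
try (BatchOperation batchOperation =
    containerDB.getStore().getBatchHandler().initBatchOperation()) {
  // Write the deletion transaction itself...
  txnTable.putWithBatch(batchOperation, delTX.getTxID(), delTX);
  // ...and the metadata counters in the same batch, so a crash cannot leave
  // the transaction recorded without its pending-delete bookkeeping.
  metadataTable.putWithBatch(batchOperation,
      OzoneConsts.DELETE_TRANSACTION_KEY, delTX.getTxID());
  metadataTable.putWithBatch(batchOperation,
      OzoneConsts.PENDING_DELETE_BLOCK_COUNT, pendingDeleteBlocks);
  containerDB.getStore().getBatchHandler()
      .commitBatchOperation(batchOperation);
}
```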
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
##########
@@ -283,7 +319,58 @@ private void deleteKeyValueContainerBlocks(
// update pending deletion blocks count and delete transaction ID in
// in-memory container status
containerData.updateDeleteTransactionId(delTX.getTxID());
+ containerData.incrPendingDeletionBlocks(newDeletionBlocks);
+ }
+ }
+ }
+ }
+
+ private void updateMetaData(
Review comment:
       Tagging on to this, can we update the metadata and the deletion data in
a single batch operation? This is not done in the original code, but keeping
them in separate writes could leave the metadata out of sync after a crash, so
I think it would be a good addition if we are calling the same metadata update
method for both versions.
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
}
crr.addAll(succeedBlocks);
return crr;
- } finally {
- container.writeUnlock();
+ } catch (IOException exception) {
+ LOG.warn(
+ "Deletion operation was not successful for container: " + container
+ .getContainerData().getContainerID());
Review comment:
Looks like in the earlier version of the code, an IOException here would
have been propagated up out of the `call` method. Are there any pros or cons to
catching the exception and returning the partially completed
`ContainerBackgroundTaskResult` instead?
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
}
crr.addAll(succeedBlocks);
return crr;
- } finally {
- container.writeUnlock();
+ } catch (IOException exception) {
+ LOG.warn(
+ "Deletion operation was not successful for container: " + container
+ .getContainerData().getContainerID());
+ }
+ return crr;
+ }
+
+ public ContainerBackgroundTaskResult deleteViaSchema2(
+ ReferenceCountedDB meta, Container container, File dataDir,
+ long startTime) {
+ ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+ try {
+ Table<String, BlockData> blockDataTable =
+ meta.getStore().getBlockDataTable();
+ DatanodeStore ds = meta.getStore();
+ DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+ (DatanodeStoreSchemaTwoImpl) ds;
+ Table<Long, DeletedBlocksTransaction>
+ delTxTable = dnStoreTwoImpl.getTxnTable();
+ List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
+ int totalBlocks = 0;
+ try (TableIterator<Long,
+ ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
+ dnStoreTwoImpl.getTxnTable().iterator()) {
+ while (iter.hasNext() && (totalBlocks < blockLimitPerTask)) {
Review comment:
Not a huge issue, but technically we can violate the `blockLimitPerTask`
here if `totalBlocks + delTx.getLocalIDList().size() > blockLimitPerTask` when
we enter the last loop iteration. Probably better to do something like this:
```suggestion
while (iter.hasNext()) {
DeletedBlocksTransaction delTx = iter.next().getValue();
int txnBlockCount = delTx.getLocalIDList().size();
if (totalBlocks + txnBlockCount <= blockLimitPerTask) {
totalBlocks += txnBlockCount;
delBlocks.add(delTx);
}
}
```
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
##########
@@ -312,9 +338,102 @@ public BackgroundTaskResult call() throws Exception {
}
crr.addAll(succeedBlocks);
return crr;
- } finally {
- container.writeUnlock();
+ } catch (IOException exception) {
+ LOG.warn(
+ "Deletion operation was not successful for container: " + container
+ .getContainerData().getContainerID());
Review comment:
       It looks like the exception here was propagated by the `call` method in
the original version of the code. Any pros/cons to returning the unfinished
`ContainerBackgroundTaskResult` on exception instead?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]