[
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16666475#comment-16666475
]
Anu Engineer commented on HDDS-361:
-----------------------------------
Thank you for updating the patch. I had to rebase it onto trunk to review. Some
very minor comments below.
* *BlockDeletingService.java:*
** Execute delete transaction for every entry in the pending delete table.
{code:java}
while (iter.hasNext()) {
{code}
** Should we have a maximum number of deletes per run, say 1000 delete Tx per
call? (See the first sketch after this list.)
** Surely I am missing something here; could you please explain why we need the
following?
{code:java}
deleteTxnFilter.accept(keyValue.getKey())
{code}
** Nit: Would it be possible to also keep track of how many bytes are deleted,
say via the chunk delete calls, or from the block info itself? (The second
sketch after this list illustrates the idea.)
{code:java}
numBlocksDeleted += delTxn.getLocalIDCount()
{code}
** Do we have to update the BCSID after each delete?
* The change in BlockIterator.java is spurious.
* Nit: DeleteBlocksCommandHandler.java: segregateTxnByContainerID – Shouldn't
this already be done at the SCM layer?
* Nit: KeyValueContainerUtil.java: Line 178: Remove the commented out code
block?
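To make the per-run cap suggestion concrete, here is a minimal, self-contained
sketch. The names below (PendingDeleteTxn, BlockDeletingSketch, maxTxnsPerRun)
are simplified stand-ins I made up for illustration, not the patch's actual
types; the real loop would work against the table iterator and the deleted
blocks transaction.
{code:java}
import java.util.Iterator;
import java.util.List;

/** Hypothetical, simplified stand-in for a pending delete transaction. */
class PendingDeleteTxn {
  private final List<Long> localIds; // block local IDs covered by this txn

  PendingDeleteTxn(List<Long> localIds) {
    this.localIds = localIds;
  }

  int getLocalIDCount() {
    return localIds.size();
  }
}

class BlockDeletingSketch {
  /**
   * Process at most maxTxnsPerRun delete transactions in a single run of the
   * background service; anything left over is picked up on the next interval.
   * Returns the number of blocks deleted in this run.
   */
  static long deleteWithCap(Iterator<PendingDeleteTxn> iter, int maxTxnsPerRun) {
    long numBlocksDeleted = 0;
    int processed = 0;
    while (iter.hasNext() && processed < maxTxnsPerRun) {
      PendingDeleteTxn delTxn = iter.next();
      // ... delete the chunks/blocks referenced by delTxn here ...
      numBlocksDeleted += delTxn.getLocalIDCount();
      processed++;
    }
    return numBlocksDeleted;
  }
}
{code}
The point is only that the loop condition bounds the work per invocation (e.g.
maxTxnsPerRun = 1000, ideally configurable), so a single run cannot monopolize
the service.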
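Similarly, a hedged sketch for the bytes-deleted nit, assuming the block
metadata (or the chunk delete path) can report the size of what was removed;
BlockInfoSketch, DeleteMetricsSketch and onBlockDeleted are hypothetical names,
not anything in the patch.
{code:java}
import java.util.List;

/** Hypothetical, simplified view of a block's metadata: its chunk sizes. */
class BlockInfoSketch {
  private final List<Long> chunkSizesInBytes;

  BlockInfoSketch(List<Long> chunkSizesInBytes) {
    this.chunkSizesInBytes = chunkSizesInBytes;
  }

  long totalBytes() {
    long total = 0;
    for (long size : chunkSizesInBytes) {
      total += size;
    }
    return total;
  }
}

/** Accumulates per-run delete metrics: blocks and bytes. */
class DeleteMetricsSketch {
  private long numBlocksDeleted;
  private long numBytesDeleted;

  /** Called once per deleted block, after its chunks have been removed. */
  void onBlockDeleted(BlockInfoSketch blockInfo) {
    numBlocksDeleted++;
    numBytesDeleted += blockInfo.totalBytes();
  }

  long getNumBlocksDeleted() {
    return numBlocksDeleted;
  }

  long getNumBytesDeleted() {
    return numBytesDeleted;
  }
}
{code}
Either the chunk delete calls could return the bytes they freed, or the block
info could be consulted before deletion; the sketch just shows where the
accumulation would hook in.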
> Use DBStore and TableStore for DN metadata
> ------------------------------------------
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Xiaoyu Yao
> Assignee: Lokesh Jain
> Priority: Major
> Attachments: HDDS-361.001.patch, HDDS-361.002.patch,
> HDDS-361.003.patch
>
>
> As part of the OM performance improvements we used Tables for storing a
> particular type of key-value pair in RocksDB. This Jira aims to use Tables
> for separating block keys and deletion transactions in the container DB.