[ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689900#comment-16689900
 ] 

Hanisha Koneru edited comment on HDDS-361 at 11/16/18 7:10 PM:
---------------------------------------------------------------

[~ljain], thanks for the updated patch.
{quote}The patch I have uploaded makes best effort to delete a block. On 
failure we can consider two options.
 # Retain the delete transaction in the db. We cannot update the number of 
pending deletion blocks partially based on successful block deletions. If we 
want to do it then we need to alter the delete transaction (to include only 
blocks which could not be deleted). This altered transaction needs to be put 
back into the db which is not a good idea. Therefore we can retain the entire 
transaction on failure of block deletion
 # We can ignore the block which could not be deleted and garbage collection 
can handle this block deletion.
 I think we should retain the entire delete transaction in case of failure in 
deletion of any block{quote}
With option 1, let's say 1 block deletion out of 1000 was unsuccessful, so we keep the entire transaction in the DB. Because of that 1 block, the entire transaction would be retried (multiple times if, for some reason, that one block cannot be deleted). This might have a cascading effect unless checked.
 What issues do you see with putting an altered transaction back into the DB? We could probably batch the altered transactions together before putting them into the DB?
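To make the question concrete, here is a minimal sketch of what an "altered" transaction could look like. The names ({{DeleteTxn}}, {{retainFailed}}) are hypothetical and do not correspond to existing HDDS classes; the point is only that, after a partial failure, the transaction is rewritten to carry just the blocks that failed, and that smaller transaction is what would go back into the DB:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch only -- DeleteTxn is not a real HDDS class. It stands in
// for a deleted-blocks transaction to illustrate "altering" a transaction so
// that, after a partial failure, only the blocks that could not be deleted
// are retried (instead of all 1000 blocks).
public class DeleteTxn {
    final long txId;
    final List<Long> blockIds;

    DeleteTxn(long txId, List<Long> blockIds) {
        this.txId = txId;
        this.blockIds = blockIds;
    }

    // Return an altered transaction holding only the blocks whose deletion
    // failed, or null when every block was deleted successfully.
    DeleteTxn retainFailed(Set<Long> successfullyDeleted) {
        List<Long> remaining = blockIds.stream()
                .filter(id -> !successfullyDeleted.contains(id))
                .collect(Collectors.toList());
        return remaining.isEmpty() ? null : new DeleteTxn(txId, remaining);
    }

    public static void main(String[] args) {
        DeleteTxn txn = new DeleteTxn(42L, List.of(1L, 2L, 3L, 4L));
        // Suppose blocks 1, 2 and 4 were deleted but block 3 failed.
        DeleteTxn altered = txn.retainFailed(Set.of(1L, 2L, 4L));
        // Only block 3 remains to be retried on the next pass.
        System.out.println(altered.blockIds); // [3]
    }
}
```

The altered transactions for several containers could then be written back in one DB batch, so the extra puts need not cost one write per transaction.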



> Use DBStore and TableStore for DN metadata
> ------------------------------------------
>
>                 Key: HDDS-361
>                 URL: https://issues.apache.org/jira/browse/HDDS-361
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Lokesh Jain
>            Priority: Major
>         Attachments: HDDS-361.001.patch, HDDS-361.002.patch, 
> HDDS-361.003.patch
>
>
> As part of OM performance improvement we used Tables for storing a particular 
> type of key value pair in the rocks db. This Jira aims to use Tables for 
> separating block keys and deletion transactions in the container db.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
