[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-08-08 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch, HDDS-267.005.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 
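The lock-and-rollback pattern described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Ozone code; names such as persistContainerFile() are placeholders:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the update path: take the write lock, apply the in-memory
// change, attempt the on-disk .container update, and roll back the
// in-memory state if the disk write fails.
class ContainerSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String metadata = "old";
    boolean diskFails = false;   // test hook simulating a failed disk write

    void update(String newMetadata) {
        lock.writeLock().lock();
        try {
            String oldMetadata = metadata;
            metadata = newMetadata;            // update in-memory state
            if (!persistContainerFile()) {     // on-disk .container update
                metadata = oldMetadata;        // reset in-memory on failure
            }
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Stand-in for the real .container file write; returns false on failure.
    private boolean persistContainerFile() {
        return !diskFails;
    }

    String getMetadata() {
        return metadata;
    }
}
```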



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-267) Handle consistency issues during container update/close

2018-08-08 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573841#comment-16573841
 ] 

Hanisha Koneru edited comment on HDDS-267 at 8/8/18 9:04 PM:
-

Thanks [~arpitagarwal] and [~bharatviswa] for reviews.
 I have updated patch v05 to handle the create and update container file cases 
separately. Thanks Bharat for catching it.

The test failures are unrelated to this patch and pass locally.


was (Author: hanishakoneru):
Thanks [~arpitagarwal] and [~bharatviswa] for reviews.
I have updated patch v05 to handle the create and update container file cases 
separately. Thanks Bharat for catching it.

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch, HDDS-267.005.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-267) Handle consistency issues during container update/close

2018-08-08 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573841#comment-16573841
 ] 

Hanisha Koneru commented on HDDS-267:
-

Thanks [~arpitagarwal] and [~bharatviswa] for reviews.
I have updated patch v05 to handle the create and update container file cases 
separately. Thanks Bharat for catching it.

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch, HDDS-267.005.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-08-08 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Attachment: HDDS-267.005.patch

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch, HDDS-267.005.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-328) Support export and import of the KeyValueContainer

2018-08-07 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572447#comment-16572447
 ] 

Hanisha Koneru commented on HDDS-328:
-

Thanks [~elek] for working on this improvement.

A few comments:
 * Do we allow copying open containers? I am thinking not. We should add a 
check that the container is closed before packing it.

 * When creating a container object and choosing the volume for it, we need to 
specify the max container size. The max size is a configurable parameter, and 
the destination node might have a different max size setting from the source 
container.
 For example, let's say the container to be copied has a 5GB max size and is 
full. We want to copy it into a node with the max size set to 2GB, and as such 
we choose a volume with 2.5GB of space left. When actually copying the 
container data, we would get a disk out-of-space exception, as we are trying to 
copy 5GB while only 2.5GB of space is left on disk.
 To avoid this, we should first copy and extract the container tar into the new 
node and then update the path variables.

 * Is there any reason for making the container file in the tarball a hidden 
file?

 * Can we change *_METADATADB_DIR_NAME to indicate that it is the db file? 
Currently, we have both the .container file and the .db file under the metadata 
directory, so it might get confusing what "metadata" denotes.

 * We should acquire the container write lock before unpacking and updating in 
KeyValueContainer#importContainerData.

 * Once the container has been successfully copied, we should update its other 
metrics, such as keyCount.
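The closed-check and locking suggestions above could look roughly like this. A hedged sketch only; the State enum and the exportContainerData()/importContainerData() bodies are illustrative placeholders, not the real KeyValueContainer API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: only closed containers may be packed, and the write lock guards
// the unpack plus metadata update during import.
class KeyValueContainerSketch {
    enum State { OPEN, CLOSED }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    State state = State.OPEN;

    void exportContainerData() {
        if (state != State.CLOSED) {   // reject packing of open containers
            throw new IllegalStateException(
                "Container must be closed before export");
        }
        // ... pack container contents into a tarball ...
    }

    void importContainerData() {
        lock.writeLock().lock();       // guard unpack + metadata update
        try {
            // ... unpack the tarball, then update path variables and
            //     metrics (e.g. keyCount) from the imported data ...
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```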

> Support export and import of the KeyValueContainer
> --
>
> Key: HDDS-328
> URL: https://issues.apache.org/jira/browse/HDDS-328
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-328.002.patch
>
>
> In HDDS-75 we pack the container data to an archive file, copy it to other 
> datanodes, and create the container from the archive.
> As I wrote in the comment of HDDS-75, I propose to separate the patch to make 
> it easier to review.
> In this patch we need to extend the existing Container interface by adding 
> export/import methods to save the container data to one binary input/output 
> stream. 
>  






[jira] [Commented] (HDDS-267) Handle consistency issues during container update/close

2018-08-07 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572108#comment-16572108
 ] 

Hanisha Koneru commented on HDDS-267:
-

Rebased. Thanks [~bharatviswa].

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-08-07 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Attachment: HDDS-267.004.patch

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch, HDDS-267.004.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-305) Datanode StateContext#addContainerActionIfAbsent will add container action even if there already is a ContainerAction

2018-07-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562381#comment-16562381
 ] 

Hanisha Koneru commented on HDDS-305:
-

Thanks [~nandakumar131] for the improvement.
+1 pending Jenkins.

> Datanode StateContext#addContainerActionIfAbsent will add container action 
> even if there already is a ContainerAction
> -
>
> Key: HDDS-305
> URL: https://issues.apache.org/jira/browse/HDDS-305
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-305.000.patch
>
>
> StateContext#addContainerActionIfAbsent should add the ContainerAction only 
> if the same action is absent. The same container for which the same action is 
> created might have different "number of keys" or "used bytes", making the 
> actions compare as unequal and causing duplicate entries in the 
> {{containerActions}} queue.
> Thanks to [~hanishakoneru] for pointing this out.
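The de-duplication idea described above can be sketched as follows. A hypothetical illustration, not the actual StateContext code: equality is defined on container ID and action type only, so actions differing just in used bytes count as the same action. All names are placeholders:

```java
import java.util.Objects;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: the queue keeps at most one entry per (container, action) pair
// because equals()/hashCode() ignore volatile fields such as usedBytes.
class StateContextSketch {
    static final class ContainerAction {
        final long containerId;
        final String action;
        final long usedBytes;   // volatile detail, excluded from equality

        ContainerAction(long containerId, String action, long usedBytes) {
            this.containerId = containerId;
            this.action = action;
            this.usedBytes = usedBytes;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ContainerAction)) {
                return false;
            }
            ContainerAction other = (ContainerAction) o;
            return containerId == other.containerId
                && action.equals(other.action);
        }

        @Override
        public int hashCode() {
            return Objects.hash(containerId, action);
        }
    }

    private final ConcurrentLinkedQueue<ContainerAction> containerActions =
        new ConcurrentLinkedQueue<>();

    void addContainerActionIfAbsent(ContainerAction a) {
        if (!containerActions.contains(a)) {   // absent by (id, action) only
            containerActions.add(a);
        }
    }

    int size() {
        return containerActions.size();
    }
}
```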






[jira] [Comment Edited] (HDDS-267) Handle consistency issues during container update/close

2018-07-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562321#comment-16562321
 ] 

Hanisha Koneru edited comment on HDDS-267 at 7/30/18 6:42 PM:
--

Rebased on top of trunk and fixed unit test failure.


was (Author: hanishakoneru):
Rebased on top of trunk.

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-267) Handle consistency issues during container update/close

2018-07-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562321#comment-16562321
 ] 

Hanisha Koneru commented on HDDS-267:
-

Rebased on top of trunk.

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-30 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Attachment: HDDS-267.003.patch

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch, 
> HDDS-267.003.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562127#comment-16562127
 ] 

Hanisha Koneru commented on HDDS-248:
-

Thank you [~bharatviswa] for reviewing and committing the patch.

 

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch, HDDS-248.004.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-267) Handle consistency issues during container update/close

2018-07-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560484#comment-16560484
 ] 

Hanisha Koneru commented on HDDS-267:
-

Rebased on top of current trunk.

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Attachment: HDDS-267.002.patch

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-267.001.patch, HDDS-267.002.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560469#comment-16560469
 ] 

Hanisha Koneru commented on HDDS-248:
-

Thanks for the review [~bharatviswa].

Updated the patch.

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch, HDDS-248.004.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Attachment: HDDS-248.004.patch

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch, HDDS-248.004.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-270) Move generic container util functions to ContianerUtils

2018-07-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560017#comment-16560017
 ] 

Hanisha Koneru commented on HDDS-270:
-

Thanks [~anu] for committing the patch.

> Move generic container util functions to ContianerUtils
> ---
>
> Key: HDDS-270
> URL: https://issues.apache.org/jira/browse/HDDS-270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-270.001.patch
>
>
> Some container util functions such as getContainerFile() are common for all 
> ContainerTypes. These functions should be moved to ContainerUtils.
> Also moved some functions to KeyValueContainer as applicable.
>  






[jira] [Updated] (HDDS-270) Move generic container util functions to ContianerUtils

2018-07-26 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-270:

Summary: Move generic container util functions to ContianerUtils  (was: 
Move generic container utils to ContianerUitls)

> Move generic container util functions to ContianerUtils
> ---
>
> Key: HDDS-270
> URL: https://issues.apache.org/jira/browse/HDDS-270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-270.001.patch
>
>
> Some container util functions such as getContainerFile() are common for all 
> ContainerTypes. These functions should be moved to ContainerUtils.
> Also moved some functions to KeyValueContainer as applicable.
>  






[jira] [Updated] (HDDS-270) Move generic container utils to ContianerUitls

2018-07-26 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-270:

Attachment: HDDS-270.001.patch

> Move generic container utils to ContianerUitls
> --
>
> Key: HDDS-270
> URL: https://issues.apache.org/jira/browse/HDDS-270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-270.001.patch
>
>
> Some container util functions such as getContainerFile() are common for all 
> ContainerTypes. These functions should be moved to ContainerUtils.
> Also moved some functions to KeyValueContainer as applicable.
>  






[jira] [Updated] (HDDS-270) Move generic container utils to ContianerUitls

2018-07-26 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-270:

Status: Patch Available  (was: Open)

> Move generic container utils to ContianerUitls
> --
>
> Key: HDDS-270
> URL: https://issues.apache.org/jira/browse/HDDS-270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-270.001.patch
>
>
> Some container util functions such as getContainerFile() are common for all 
> ContainerTypes. These functions should be moved to ContainerUtils.
> Also moved some functions to KeyValueContainer as applicable.
>  






[jira] [Commented] (HDDS-252) Eliminate the datanode ID file

2018-07-25 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556387#comment-16556387
 ] 

Hanisha Koneru commented on HDDS-252:
-

Thanks for working on this [~bharatviswa].

In VolumeSet, we are iterating over all the version files twice: once to check 
that the dnUuid is consistent, and again to initialize the volumes. These two 
operations can be combined into one. We can have something similar to 
{{VolumeSet#checkAndSetClusterID()}} for the datanodeUUID also. 
 If there are no version files, then after initializing volumes, VolumeSet can 
assign itself a new UUID. When the volumes are formatted after VersionEndPoint 
response from SCM, this new UUID will be persisted on disk.

We do not need to store the {{DatanodeDetails}} in each HddsVolume. 
DatanodeUUID would suffice.
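The single-pass suggestion could be sketched as below. checkAndSetDatanodeUuid() mirrors the existing VolumeSet#checkAndSetClusterID() pattern; the List<String> stand-in for per-volume version files and all names are assumptions, not the actual VolumeSet code:

```java
import java.util.List;
import java.util.UUID;

// Sketch: consistency check and volume initialization happen in one
// iteration; if no version file was found, a fresh UUID is assigned and
// will be persisted when the volumes are formatted.
class VolumeSetSketch {
    private String datanodeUuid;   // null until the first version file is seen

    void initializeVolumes(List<String> persistedUuids) {
        for (String uuid : persistedUuids) {   // single pass: check + init
            checkAndSetDatanodeUuid(uuid);
            // ... initialize the volume here as well ...
        }
        if (datanodeUuid == null) {            // no version files on any volume
            datanodeUuid = UUID.randomUUID().toString();
        }
    }

    private void checkAndSetDatanodeUuid(String uuid) {
        if (datanodeUuid == null) {
            datanodeUuid = uuid;
        } else if (!datanodeUuid.equals(uuid)) {
            throw new IllegalStateException(
                "Inconsistent datanodeUuid across volumes");
        }
    }

    String getDatanodeUuid() {
        return datanodeUuid;
    }
}
```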

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-252.00.patch, HDDS-252.01.patch, HDDS-252.02.patch, 
> HDDS-252.03.patch, HDDS-252.04.patch
>
>
> This Jira is to remove the datanodeID file. After the ContainerIO work 
> (HDDS-48 branch) is merged, we have a version file in each Volume which 
> stores the datanodeUuid and some additional fields.
> Also, if the disk containing the datanodeId path is removed, that DN becomes 
> unusable with the current code.






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Description: 
During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 

A write lock is obtained before updating the container data during close or 
update operations.

During update operation, if the on-disk update of .container file fails, then 
the in-memory container metadata is also reset to the old value.

During close operation, if the on-disk update of .container file fails, then 
the in-memory containerState is set to CLOSING so that no new operations are 
permitted. 

  was:
During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 

During update operation, if the on-disk update of .container file fails, then 
the in-memory container metadata is also reset to the old value.

During close operation, if the on-disk update of .container file fails, then 
the in-memory containerState is set to CLOSING so that no new operations are 
permitted. 


> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-267.001.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 
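The locking and rollback behavior described in this issue can be sketched as below. This is a hedged illustration with placeholder names (ContainerSketch, DiskWriter), not the actual HDDS classes: a write lock guards the metadata, a failed on-disk update restores the old in-memory value, and a failed close leaves the state as CLOSING.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the update/close consistency scheme; names are illustrative.
public class ContainerSketch {
  // Stand-in for persisting the .container file; a real implementation
  // writes to disk and may throw on I/O failure.
  public interface DiskWriter {
    void write(Map<String, String> metadata) throws Exception;
  }

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private Map<String, String> metadata = new HashMap<>();
  private String state = "OPEN";

  public void update(Map<String, String> newMetadata, DiskWriter disk) {
    lock.writeLock().lock();
    try {
      Map<String, String> old = metadata;
      metadata = new HashMap<>(newMetadata);
      try {
        disk.write(metadata);
      } catch (Exception e) {
        metadata = old; // on-disk update failed: restore the old value
      }
    } finally {
      lock.writeLock().unlock();
    }
  }

  public void close(DiskWriter disk) {
    lock.writeLock().lock();
    try {
      state = "CLOSED";
      try {
        disk.write(metadata);
      } catch (Exception e) {
        state = "CLOSING"; // no new operations are permitted in this state
      }
    } finally {
      lock.writeLock().unlock();
    }
  }

  public Map<String, String> getMetadata() { return metadata; }
  public String getState() { return state; }
}
```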






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Status: Patch Available  (was: Open)

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-267.001.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> A write lock is obtained before updating the container data during close or 
> update operations.
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Description: 
During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 

During update operation, if the on-disk update of .container file fails, then 
the in-memory container metadata is also reset to the old value.

During close operation, if the on-disk update of .container file fails, then 
the in-memory containerState is set to CLOSING so that no new operations are 
permitted. 

  was:During container update and close, the .container file on disk is 
modified. We should make sure that the in-memory state and the on-disk state 
for a container are consistent. 


> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-267.001.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 
> During update operation, if the on-disk update of .container file fails, then 
> the in-memory container metadata is also reset to the old value.
> During close operation, if the on-disk update of .container file fails, then 
> the in-memory containerState is set to CLOSING so that no new operations are 
> permitted. 






[jira] [Updated] (HDDS-267) Handle consistency issues during container update/close

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-267:

Attachment: HDDS-267.001.patch

> Handle consistency issues during container update/close
> ---
>
> Key: HDDS-267
> URL: https://issues.apache.org/jira/browse/HDDS-267
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-267.001.patch
>
>
> During container update and close, the .container file on disk is modified. 
> We should make sure that the in-memory state and the on-disk state for a 
> container are consistent. 






[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-25 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556137#comment-16556137
 ] 

Hanisha Koneru commented on HDDS-266:
-

Thank you [~nandakumar131] and [~bharatviswa] for reviewing and committing the 
patch.

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch, HDDS-266.004.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-25 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556135#comment-16556135
 ] 

Hanisha Koneru commented on HDDS-248:
-

Hi [~nandakumar131], Rebased patch v03 on top of trunk.

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Status: Patch Available  (was: Open)

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Attachment: HDDS-248.003.patch

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch, 
> HDDS-248.003.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-24 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554800#comment-16554800
 ] 

Hanisha Koneru commented on HDDS-248:
-

Thanks [~anu], [~xyao] and [~arpitagarwal] for the offline discussion.
Removed the oneof protobuf feature in patch v02, as this feature is only 
available from Protobuf 2.6 onwards.

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-24 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Attachment: HDDS-248.002.patch

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch, HDDS-248.002.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor as per the new implementation of StorageIO in HDDS-48. 






[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-24 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554620#comment-16554620
 ] 

Hanisha Koneru commented on HDDS-266:
-

Thanks for the review [~nandakumar131].
I have addressed your comments and fixed the license warning in patch v04.

Also added DUMMY_CHECKSUM to store the zeroed byte array.
{code:java}
private static final String DUMMY_CHECKSUM = new String(new byte[64],
    CHARSET_ENCODING);

public void setChecksumTo0ByteArray() {
  this.checksum = DUMMY_CHECKSUM;
}
{code}
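The single-file checksum scheme implied by the snippet above can be sketched as follows. This is a hedged illustration, not the actual HDDS code: the checksum field is filled with a 64-character dummy (the width of a SHA-256 hex digest), the whole file is hashed, and verification recomputes the hash after restoring the dummy. The class and method names are assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of integrating the checksum into the .container file itself.
public class ContainerChecksum {
  // Dummy value occupying the checksum field while hashing.
  public static final String DUMMY_CHECKSUM =
      new String(new byte[64], StandardCharsets.UTF_8);

  // Hash the .container contents with the checksum field zeroed out.
  public static String compute(String contentsWithDummy) {
    try {
      MessageDigest sha = MessageDigest.getInstance("SHA-256");
      byte[] digest =
          sha.digest(contentsWithDummy.getBytes(StandardCharsets.UTF_8));
      StringBuilder hex = new StringBuilder();
      for (byte b : digest) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e); // SHA-256 is always available
    }
  }

  // Verification recomputes the hash over the contents with the dummy field.
  public static boolean verify(String contentsWithDummy, String stored) {
    return compute(contentsWithDummy).equals(stored);
  }
}
```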

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch, HDDS-266.004.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-24 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Attachment: HDDS-266.004.patch

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch, HDDS-266.004.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-23 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Attachment: HDDS-266.003.patch

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-23 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553174#comment-16553174
 ] 

Hanisha Koneru commented on HDDS-266:
-

Thanks for the review [~bharatviswa]. Addressed your comments in patch v03.

Test failures are unrelated.

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551369#comment-16551369
 ] 

Hanisha Koneru commented on HDDS-203:
-

Thanks for working on this [~shashikant].
Patch LGTM overall.

Just a few nits:
* In the proto file, can we rename the field in ContainerCommandRequestProto 
and ContainerCommandResponseProto to {{getCommittedBlockLength}}, just to be 
consistent with the other fields.

* Typo in KeyUtils, line 149: etCommittedBlockLength

* In KeyManagerImpl#getCommittedBlockLength(), the Precondition checks are 
redundant. These are required fields and we already parse the containerID in 
HddsDispatcher.

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch, HDDS-203.02.patch, 
> HDDS-203.03.patch
>
>
> When a container gets closed on the Datanode while active writes are 
> happening from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to query the 
> last committed block length from the datanodes and update the OzoneMaster with 
> the updated length for the block. This Jira proposes to add an RPC call to 
> get the last committed length of a block on a Datanode.
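The client-side flow described above can be sketched as below. All names here (DatanodeApi, writeChunk, writeOrRecover) are illustrative placeholders, not the actual HDDS-203 API: on a ContainerClosedException the client falls back to the new committed-length query so it can update OzoneMaster.

```java
// Sketch of the client falling back to getCommittedBlockLength on a
// closed container; placeholder names throughout.
public class CommittedLengthClient {
  public static class ContainerClosedException extends Exception {}

  // Placeholder for the datanode RPC surface, including the new call.
  public interface DatanodeApi {
    void writeChunk(long blockId, byte[] data) throws ContainerClosedException;
    long getCommittedBlockLength(long blockId);
  }

  // Returns -1 on a successful write, otherwise the committed length that
  // the caller should report to OzoneMaster before reallocating the block.
  public static long writeOrRecover(DatanodeApi dn, long blockId, byte[] data) {
    try {
      dn.writeChunk(blockId, data);
      return -1;
    } catch (ContainerClosedException e) {
      return dn.getCommittedBlockLength(blockId);
    }
  }
}
```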






[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Attachment: HDDS-266.002.patch

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551182#comment-16551182
 ] 

Hanisha Koneru commented on HDDS-266:
-

Rebased the patch and fixed the findbugs warning and relevant tests.

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551115#comment-16551115
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thank you all for the reviews. I have committed this to trunk.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551031#comment-16551031
 ] 

Hanisha Koneru commented on HDDS-250:
-

Since there are no further comments, I will commit patch v03 shortly.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Created] (HDDS-270) Move generic container utils to ContainerUtils

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-270:
---

 Summary: Move generic container utils to ContainerUtils
 Key: HDDS-270
 URL: https://issues.apache.org/jira/browse/HDDS-270
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Some container util functions such as getContainerFile() are common for all 
ContainerTypes. These functions should be moved to ContainerUtils.

Also moved some functions to KeyValueContainer as applicable.

 






[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-19 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549870#comment-16549870
 ] 

Hanisha Koneru commented on HDDS-249:
-

Thanks [~bharatviswa] for working on this.
Patch v04 LGTM.

I have just one minor comment. In TestEndPoint, line 206, when checking the 
output, can we verify that the "missing scm directory" error is for the 
expected scmId?

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch, 
> HDDS-249.03.patch, HDDS-249.04.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on a datanode, fail that volume.
>  # Validate the SCM ID response from the SCM.
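The two conditions above can be sketched as simple predicates. This is a hedged illustration with assumed method names, not the actual HDDS code.

```java
import java.util.List;

// Sketch of the two HDDS-249 checks; method names are illustrative.
public class ScmIdChecks {
  // Check 1: a usable volume has at most one SCM directory.
  public static boolean isVolumeUsable(List<String> scmDirsOnVolume) {
    return scmDirsOnVolume.size() <= 1;
  }

  // Check 2: validate the SCM ID returned in the version response.
  public static boolean isScmIdValid(String expected, String fromResponse) {
    return expected != null && expected.equals(fromResponse);
  }
}
```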






[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Attachment: HDDS-266.001.patch

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-266.001.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Updated] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-266:

Status: Patch Available  (was: Open)

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-266.001.patch
>
>
> Currently, each container metadata has 2 files - .container and .checksum 
> file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Created] (HDDS-267) Handle consistency issues during container update/close

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-267:
---

 Summary: Handle consistency issues during container update/close
 Key: HDDS-267
 URL: https://issues.apache.org/jira/browse/HDDS-267
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 






[jira] [Created] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-266:
---

 Summary: Integrate checksum into .container file
 Key: HDDS-266
 URL: https://issues.apache.org/jira/browse/HDDS-266
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Currently, each container's metadata has 2 files: the .container file and the .checksum file.
In this Jira, we propose to integrate the checksum into the .container file 
itself. This will help with synchronization during container updates.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-19 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549648#comment-16549648
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks [~ljain]. I have created HDDS-265 to track this.

I will go ahead and commit patch v03 if there are no further comments.

Test failures are unrelated to this patch.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Created] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-265:
---

 Summary: Move numPendingDeletionBlocks and deleteTransactionId 
from ContainerData to KeyValueContainerData
 Key: HDDS-265
 URL: https://issues.apache.org/jira/browse/HDDS-265
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


"numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
KeyValueContainers. As such they should be moved to KeyValueContainerData from 
ContainerData.

ContainerReport should also be refactored to incorporate this change. 

Please refer to [~ljain]'s comment in HDDS-250.






[jira] [Commented] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-18 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548321#comment-16548321
 ] 

Hanisha Koneru commented on HDDS-257:
-

Thanks for the review [~bharatviswa]. Added a unit test in patch v02.

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch, HDDS-257.002.patch
>
>
> When HddsDispatcher is shutdown, it should call the VolumeSet#shutdown to 
> shut down the volumes.






[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-18 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-257:

Attachment: HDDS-257.002.patch

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch, HDDS-257.002.patch
>
>
> When HddsDispatcher is shutdown, it should call the VolumeSet#shutdown to 
> shut down the volumes.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-18 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16548296#comment-16548296
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks for the review [~ljain].

I agree that we should have {{numPendingDeletionBlocks}} and 
{{deleteTransactionId}} in KeyValueContainerData rather than in ContainerData. 

But that will be a fairly big refactoring by itself. The whole 
{{BlockDeletingService}} would have to be refactored as it depends heavily on 
ContainerData.numPendingDeletionBlocks.

I think it would be better if we do that refactoring in a separate Jira along 
with the ContainerReport refactoring. What do you think?

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Resolved] (HDDS-261) Fix TestOzoneConfigurationFields

2018-07-18 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-261.
-
Resolution: Duplicate

Sorry, this is a duplicate of HDDS-255.

> Fix TestOzoneConfigurationFields
> 
>
> Key: HDDS-261
> URL: https://issues.apache.org/jira/browse/HDDS-261
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Priority: Minor
>
> HDDS-187 added a config key - {{hdds.command.status.report.interval}} - to 
> {{HddsConfigKeys}}. This class also needs to be added to the 
> {{configurationClasses}} field in {{TestOzoneConfigurationFields}} so 
> that the above-mentioned config key is loaded into 
> configurationMemberVariables.
> {code:java}
> configurationClasses =
> new Class[] {OzoneConfigKeys.class, ScmConfigKeys.class,
> OMConfigKeys.class};{code}






[jira] [Created] (HDDS-261) Fix TestOzoneConfigurationFields

2018-07-18 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-261:
---

 Summary: Fix TestOzoneConfigurationFields
 Key: HDDS-261
 URL: https://issues.apache.org/jira/browse/HDDS-261
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


HDDS-187 added a config key - {{hdds.command.status.report.interval}} - to 
{{HddsConfigKeys}}. This class also needs to be added to the 
{{configurationClasses}} field in {{TestOzoneConfigurationFields}} so that the 
above-mentioned config key is loaded into configurationMemberVariables.
{code:java}
configurationClasses =
    new Class[] {OzoneConfigKeys.class, ScmConfigKeys.class,
        OMConfigKeys.class};{code}
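The fix would amount to extending that array with HddsConfigKeys, roughly as below (a self-contained sketch; the stand-in classes are empty placeholders for the real Ozone key classes, which live in separate modules):

```java
// Self-contained sketch: the stand-in classes below are empty placeholders
// for the real Ozone config key classes.
class OzoneConfigKeys {}
class ScmConfigKeys {}
class OMConfigKeys {}
class HddsConfigKeys {}

public class ConfigClassesSketch {
  // Proposed change: include HddsConfigKeys so that keys such as
  // hdds.command.status.report.interval are scanned by the test.
  static Class<?>[] configurationClasses =
      new Class<?>[] {OzoneConfigKeys.class, ScmConfigKeys.class,
          OMConfigKeys.class, HddsConfigKeys.class};

  public static void main(String[] args) {
    for (Class<?> c : configurationClasses) {
      System.out.println(c.getSimpleName());
    }
  }
}
```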






[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-257:

Assignee: Hanisha Koneru
  Status: Patch Available  (was: Open)

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch
>
>
> When HddsDispatcher is shutdown, it should call the VolumeSet#shutdown to 
> shut down the volumes.






[jira] [Updated] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-257:

Attachment: HDDS-257.001.patch

> Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
> ---
>
> Key: HDDS-257
> URL: https://issues.apache.org/jira/browse/HDDS-257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-257.001.patch
>
>
> When HddsDispatcher is shutdown, it should call the VolumeSet#shutdown to 
> shut down the volumes.






[jira] [Created] (HDDS-257) Hook up VolumeSet#shutdown from HddsDispatcher#shutdown

2018-07-13 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-257:
---

 Summary: Hook up VolumeSet#shutdown from HddsDispatcher#shutdown
 Key: HDDS-257
 URL: https://issues.apache.org/jira/browse/HDDS-257
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
 Fix For: 0.2.1


When HddsDispatcher is shut down, it should call VolumeSet#shutdown to shut 
down the volumes.
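The hookup can be sketched as below (the class shapes are simplified stand-ins, not the actual Ozone types):

```java
// Minimal sketch of the delegation described above.
class VolumeSet {
  boolean shutDown = false;

  // In the real implementation this would release every volume it manages.
  void shutdown() {
    shutDown = true;
  }
}

public class HddsDispatcherSketch {
  final VolumeSet volumeSet;

  HddsDispatcherSketch(VolumeSet volumeSet) {
    this.volumeSet = volumeSet;
  }

  // Dispatcher shutdown delegates to the volume layer so the volumes are
  // shut down along with the dispatcher.
  void shutdown() {
    volumeSet.shutdown();
  }

  public static void main(String[] args) {
    HddsDispatcherSketch dispatcher = new HddsDispatcherSketch(new VolumeSet());
    dispatcher.shutdown();
    System.out.println(dispatcher.volumeSet.shutDown);
  }
}
```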






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543713#comment-16543713
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks for the review [~xyao].


 HDDS-251 is refactoring the BlockDeletingService to work with the new Storage 
layer. I believe BlockDeletingService is specific to KeyValue containers. 
[~ljain], can you please confirm if my understanding is correct?

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.003.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch, HDDS-250.003.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Commented] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543607#comment-16543607
 ] 

Hanisha Koneru commented on HDDS-241:
-

Thanks [~xyao] for the review.

Addressed the review comments in patch v02 and fixed the failing unit tests.

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch, HDDS-241.002.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: HDDS-241.002.patch

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch, HDDS-241.002.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: (was: HDDS-241.002.patch)

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: HDDS-241.002.patch

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch, HDDS-241.002.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543539#comment-16543539
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks [~bharatviswa] for the review. 

Addressed review comments in patch v02. 
The failing unit test passes locally; the failure is unrelated to this patch and 
is caused by a timeout. 

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-13 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.002.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch, 
> HDDS-250.002.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Commented] (HDDS-249) Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request

2018-07-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542266#comment-16542266
 ] 

Hanisha Koneru commented on HDDS-249:
-

Thanks for working on this, [~bharatviswa].
 * I think this work is dependent on HDDS-241. Let's say we have a situation 
where a volume has an scmDir but the VERSION file is missing, and during 
formatting we fail to create the VERSION file. The following check in 
HddsVolumeUtil assumes that the single file inside the hdds dir is the VERSION 
file whereas, in this situation, it is an scmDir. We would end up creating two 
scmDirs with no VERSION file and returning the volume as healthy.
{code:java}
File[] hddsFiles = hddsVolume.getHddsRootDir().listFiles();
if (hddsFiles !=null && hddsFiles.length == 1) {
  // DN started for first time or this is a newly added volume.
  // So we create scm directory. So only version file should be available.
  if (!scmDir.mkdir()) {
logger.error("Unable to create scmDir {}", scmDir);
  }
  result = true;
} else if (!scmDir.exists()) {
  // Already existing volume, and this is not first time dn is started
  logger.error("Volume {} is in Inconsistent state, missing scm {} " +
  "directory", volumeRoot, scmId);
} else {
  result = true;
}
{code}
Once HDDS-241 goes in, we can detect an inconsistent volume and avoid this 
situation.

 * We will also need to verify that the scmId matches the name of the scmDir 
inside the hddsVolume dir.

 * NIT: Unrelated to this change, but in VersionEndPointTask, line #80, the null 
check is for clusterId. Could you please fix that as well along with this change?
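A stricter version of the check discussed in the first bullet might look like the sketch below (self-contained and simplified; the directory layout and names are assumptions, not the actual HddsVolumeUtil code): the volume is treated as newly formatted only when the lone entry under the hdds root is the VERSION file.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of a stricter consistency check: before treating the
// volume as freshly formatted, confirm the single entry under the hdds root
// is the VERSION file and not a leftover scm directory.
public class VolumeCheckSketch {

  static boolean checkVolume(File hddsRoot, String scmId) {
    File scmDir = new File(hddsRoot, scmId);
    File[] hddsFiles = hddsRoot.listFiles();
    if (hddsFiles != null && hddsFiles.length == 1) {
      // Only treat this as a new volume if the lone entry is VERSION;
      // otherwise it could be an orphaned scm directory.
      if (!hddsFiles[0].getName().equals("VERSION")) {
        return false; // inconsistent: single entry is not the VERSION file
      }
      return scmDir.mkdir();
    } else if (!scmDir.exists()) {
      return false; // existing volume, but the expected scm dir is missing
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    Path root = Files.createTempDirectory("hddsRootSketch");
    Files.createFile(root.resolve("VERSION"));
    System.out.println(checkVolume(root.toFile(), "scm-1"));
  }
}
```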

> Fail if multiple SCM IDs on the DataNode and add SCM ID check after version 
> request
> ---
>
> Key: HDDS-249
> URL: https://issues.apache.org/jira/browse/HDDS-249
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-249.00.patch, HDDS-249.01.patch, HDDS-249.02.patch
>
>
> This Jira takes care of the following conditions:
>  # If multiple SCM directories exist on the datanode, that volume is failed.
>  # Validate the SCM ID response from SCM.






[jira] [Commented] (HDDS-250) Cleanup ContainerData

2018-07-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542146#comment-16542146
 ] 

Hanisha Koneru commented on HDDS-250:
-

Thanks for the review [~bharatviswa].

Updated the patch to address your comment and also made the following changes.
 * Made ContainerData an abstract class. Each new ContainerType should extend 
this class.
 * Renamed some variables as per their usage.

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.001.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch, HDDS-250.001.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Updated] (HDDS-250) Cleanup ContainerData

2018-07-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-250:

Attachment: HDDS-250.000.patch

> Cleanup ContainerData
> -
>
> Key: HDDS-250
> URL: https://issues.apache.org/jira/browse/HDDS-250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-250.000.patch
>
>
> The following functions in ContainerData are redundant. MetadataPath and 
> ChunksPath are specific to KeyValueContainerData. 
> ContainerPath is the common path in ContainerData which points to the base 
> dir of the container.






[jira] [Created] (HDDS-250) Cleanup ContainerData

2018-07-10 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-250:
---

 Summary: Cleanup ContainerData
 Key: HDDS-250
 URL: https://issues.apache.org/jira/browse/HDDS-250
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Fix For: 0.2.1


The following functions in ContainerData are redundant. MetadataPath and 
ChunksPath are specific to KeyValueContainerData. 

ContainerPath is the common path in ContainerData which points to the base dir 
of the container.






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Status: Patch Available  (was: Open)

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor them as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-248:

Attachment: HDDS-248.001.patch

> Refactor DatanodeContainerProtocol.proto 
> -
>
> Key: HDDS-248
> URL: https://issues.apache.org/jira/browse/HDDS-248
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-248.001.patch
>
>
> This Jira proposes to clean up the DatanodeContainerProtocol protos and 
> refactor them as per the new implementation of StorageIO in HDDS-48. 






[jira] [Created] (HDDS-248) Refactor DatanodeContainerProtocol.proto

2018-07-10 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-248:
---

 Summary: Refactor DatanodeContainerProtocol.proto 
 Key: HDDS-248
 URL: https://issues.apache.org/jira/browse/HDDS-248
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Fix For: 0.2.1


This Jira proposes to clean up the DatanodeContainerProtocol protos and refactor 
them as per the new implementation of StorageIO in HDDS-48. 






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-10 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Status: Patch Available  (was: Open)

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Updated] (HDDS-241) Handle Volume in inconsistent state

2018-07-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-241:

Attachment: HDDS-241.001.patch

> Handle Volume in inconsistent state
> ---
>
> Key: HDDS-241
> URL: https://issues.apache.org/jira/browse/HDDS-241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-241.001.patch
>
>
> During startup, a volume can be in an inconsistent state if 
>  # Volume Root path is a file and not a directory
>  # Volume Root is non empty but VERSION file does not exist
> If a volume is detected to be in an inconsistent state, we should skip 
> loading it during startup.






[jira] [Created] (HDDS-241) Handle Volume in inconsistent state

2018-07-09 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-241:
---

 Summary: Handle Volume in inconsistent state
 Key: HDDS-241
 URL: https://issues.apache.org/jira/browse/HDDS-241
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Fix For: 0.2.1


During startup, a volume can be in an inconsistent state if 
 # the Volume Root path is a file and not a directory
 # the Volume Root is non-empty but the VERSION file does not exist

If a volume is detected to be in an inconsistent state, we should skip loading 
it during startup.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch, 
> HDDS-213-HDDS-48.004.patch, HDDS-213-HDDS-48.005.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.
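The invariant quoted above can be sketched as follows (a minimal illustration with assumed names, not the actual KeyValueContainer code; the rollback on a failed disk write mirrors the behavior discussed in HDDS-267): both states are mutated under one write lock, and the in-memory value is restored if the on-disk update fails.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch: in-memory metadata and its on-disk copy are updated under
// one write lock, with rollback of the in-memory state on disk failure.
public class ContainerUpdateSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, String> metadata = new HashMap<>();

  // Stand-in for persisting the .container file; may throw on failure.
  interface DiskWriter {
    void persist(Map<String, String> snapshot) throws Exception;
  }

  boolean update(String key, String value, DiskWriter writer) {
    lock.writeLock().lock();
    try {
      String oldValue = metadata.put(key, value);
      try {
        writer.persist(new HashMap<>(metadata));
        return true;
      } catch (Exception e) {
        // Disk update failed: restore the old in-memory value.
        if (oldValue == null) {
          metadata.remove(key);
        } else {
          metadata.put(key, oldValue);
        }
        return false;
      }
    } finally {
      lock.writeLock().unlock();
    }
  }

  String get(String key) {
    lock.readLock().lock();
    try {
      return metadata.get(key);
    } finally {
      lock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    ContainerUpdateSketch c = new ContainerUpdateSketch();
    c.update("state", "OPEN", snapshot -> { });
    c.update("state", "CLOSED", snapshot -> {
      throw new Exception("disk full"); // simulated on-disk failure
    });
    System.out.println(c.get("state")); // prints "OPEN"
  }
}
```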






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-09 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537220#comment-16537220
 ] 

Hanisha Koneru commented on HDDS-213:
-

Thanks for the reviews, [~bharatviswa].

Committed to feature branch HDDS-48.

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch, 
> HDDS-213-HDDS-48.004.patch, HDDS-213-HDDS-48.005.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-09 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537075#comment-16537075
 ] 

Hanisha Koneru commented on HDDS-213:
-

Thanks Bharat for the review. Addressed it in patch v05.

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch, 
> HDDS-213-HDDS-48.004.patch, HDDS-213-HDDS-48.005.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.005.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch, 
> HDDS-213-HDDS-48.004.patch, HDDS-213-HDDS-48.005.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-08 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.004.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch, 
> HDDS-213-HDDS-48.004.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535505#comment-16535505
 ] 

Hanisha Koneru commented on HDDS-213:
-

Thanks [~bharatviswa] for the review.

Updated the patch.

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.003.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch, HDDS-213-HDDS-48.003.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-237) Add updateDeleteTransactionId

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535403#comment-16535403
 ] 

Hanisha Koneru commented on HDDS-237:
-

Thanks [~bharatviswa] for this patch.

+1 pending Jenkins.

> Add updateDeleteTransactionId
> -
>
> Key: HDDS-237
> URL: https://issues.apache.org/jira/browse/HDDS-237
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-237-HDDS-48.00.patch
>
>
> Add updateDeleteTransactionId to our new classes, which is added to 
> ContainerData in HDDS-178. This is being done to merge HDDS-48 into trunk.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.002.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch, 
> HDDS-213-HDDS-48.002.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535367#comment-16535367
 ] 

Hanisha Koneru commented on HDDS-211:
-

Thanks [~bharatviswa] for the fix.

+1 pending Jenkins.

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch, 
> HDDS-211-HDDS-48.02.patch
>
>
> Add a lock to guard against multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed, and the remaining clients should receive a 
> StorageContainerException. 
>  
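The guard described above can be sketched with an atomic claim on the containerID. `ContainerRegistry` and the exception class below are hypothetical stand-ins for the datanode's container map and `StorageContainerException`, not the actual Ozone classes.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of the create-container guard: exactly one creator
// of a given containerID succeeds; all others get an exception.
class ContainerRegistry {
    static class ContainerAlreadyExistsException extends Exception {
        ContainerAlreadyExistsException(long containerID) {
            super("Container " + containerID + " already exists");
        }
    }

    private final ConcurrentMap<Long, String> containers = new ConcurrentHashMap<>();

    // putIfAbsent atomically claims the containerID, so concurrent callers
    // for the same ID cannot both succeed.
    void create(long containerID, String owner) throws ContainerAlreadyExistsException {
        if (containers.putIfAbsent(containerID, owner) != null) {
            throw new ContainerAlreadyExistsException(containerID);
        }
    }
}
```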






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-05 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.001.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch, HDDS-213-HDDS-48.001.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Commented] (HDDS-182) CleanUp Reimplemented classes

2018-07-05 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534216#comment-16534216
 ] 

Hanisha Koneru commented on HDDS-182:
-

Thanks for catching it, [~bharatviswa].

Removed the checks.

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch, HDDS-182-HDDS-48.002.patch, 
> HDDS-182-HDDS-48.003.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-05 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: HDDS-182-HDDS-48.003.patch

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch, HDDS-182-HDDS-48.002.patch, 
> HDDS-182-HDDS-48.003.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Commented] (HDDS-182) CleanUp Reimplemented classes

2018-07-05 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534067#comment-16534067
 ] 

Hanisha Koneru commented on HDDS-182:
-

Thanks [~bharatviswa].
Addressed the review comments and fixed some JUnit and FindBugs errors.

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch, HDDS-182-HDDS-48.002.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-05 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: HDDS-182-HDDS-48.002.patch

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch, HDDS-182-HDDS-48.002.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Description: When updating the container metadata, the in-memory state and 
on-disk state should be updated under the same lock.

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.






[jira] [Created] (HDDS-215) Handle Container Already Exists exception on client side

2018-07-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-215:
---

 Summary: Handle Container Already Exists exception on client side
 Key: HDDS-215
 URL: https://issues.apache.org/jira/browse/HDDS-215
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru


When creating containers on DN, if we get CONTAINER_ALREADY_EXISTS exception, 
it should be handled on the client side.
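One plausible client-side policy is to treat the error as benign, since the container exists and can be used regardless of which client created it. The sketch below is hypothetical: the enum values mirror, but are not, the actual datanode protocol result codes.

```java
// Hypothetical sketch of client-side handling of a create-container result.
// Result values are illustrative stand-ins for the real protocol codes.
class CreateContainerClient {
    enum Result { SUCCESS, CONTAINER_ALREADY_EXISTS, IO_ERROR }

    // CONTAINER_ALREADY_EXISTS is not fatal from the client's point of view:
    // the container exists and can be written to, whoever created it first.
    static boolean createOrReuse(Result result) {
        return result == Result.SUCCESS || result == Result.CONTAINER_ALREADY_EXISTS;
    }
}
```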






[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.000.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch
>
>







[jira] [Created] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-213:
---

 Summary: Single lock to synchronize KeyValueContainer#update
 Key: HDDS-213
 URL: https://issues.apache.org/jira/browse/HDDS-213
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Fix For: 0.2.1









[jira] [Comment Edited] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530573#comment-16530573
 ] 

Hanisha Koneru edited comment on HDDS-182 at 7/3/18 5:33 PM:
-

* Fixed the following integration tests in this patch
 ## TestContainerPersistence
 ## TestContainerServer
 ## TestSCMCli - Ignoring this test for now. Will open a new Jira to fix this.
 * Changed containerId to containerID in ContainerData to be consistent with 
the naming convention (e.g. clusterID, scmID).
 * Removed the restriction on updating existing container metadata fields.
 * Fixed TestKeyValueHandler failing in Jenkins run.


was (Author: hanishakoneru):
* Fixed the following integration tests in this patch
*# TestContainerPersistence
*# TestContainerServer
*# TestSCMCli - Ignoring this test for now. Will open a new Jira to fix this.
* Changed containerId to containerID in ContainerData to be consistent with 
naming convention (for eg. clusterID, scmID).
* Removed restriction of not updating the existing container metadata fields.

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Status: Patch Available  (was: In Progress)

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: HDDS-182-HDDS-48.001.patch

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: (was: HDDS-182-HDDS-48.000.patch)

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/ methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
> Also, fix integration tests broken by deleting these classes.






[jira] [Comment Edited] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531650#comment-16531650
 ] 

Hanisha Koneru edited comment on HDDS-176 at 7/3/18 4:54 PM:
-

LGTM. +1.

Thanks [~bharatviswa] for the contribution. Committed to HDDS-48 branch.


was (Author: hanishakoneru):
Thanks [~bharatviswa].

LGTM. +1.

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch, HDDS-176-HDDS-48.01.patch
>
>
> # ContainerData should hold the container maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers may have a different 
> max size than newly created containers.
>  # Also add keyCount, which tracks the number of keys in the container.
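Persisting these two fields with the container can be sketched as a simple properties round trip. This is an illustration only: the field names and the use of `java.util.Properties` are assumptions, not the real .container file format.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

// Hypothetical sketch: per-container maxSize and keyCount persisted with
// the container, so an old container keeps its own max size even after the
// configured default changes. Field names are illustrative.
class ContainerDataSketch {
    long maxSizeBytes;
    long keyCount;

    String serialize() throws IOException {
        Properties props = new Properties();
        props.setProperty("maxSizeBytes", Long.toString(maxSizeBytes));
        props.setProperty("keyCount", Long.toString(keyCount));
        StringWriter out = new StringWriter();
        props.store(out, null);
        return out.toString();
    }

    static ContainerDataSketch deserialize(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        ContainerDataSketch data = new ContainerDataSketch();
        data.maxSizeBytes = Long.parseLong(props.getProperty("maxSizeBytes"));
        data.keyCount = Long.parseLong(props.getProperty("keyCount"));
        return data;
    }
}
```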





