[jira] [Assigned] (HDDS-1834) ozone fs -mkdir -p does not create parent directories
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain reassigned HDDS-1834:
---------------------------------

    Assignee: Lokesh Jain

> ozone fs -mkdir -p does not create parent directories
> ------------------------------------------------------
>
>                 Key: HDDS-1834
>                 URL: https://issues.apache.org/jira/browse/HDDS-1834
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Filesystem
>            Reporter: Doroszlai, Attila
>            Assignee: Lokesh Jain
>            Priority: Major
>
> The ozonesecure-ozonefs acceptance test is failing because {{ozone fs -mkdir -p}} only creates a key for the specified directory, not for its parents.
> {noformat}
> ozone fs -mkdir -p o3fs://bucket1.fstest/testdir/deep
> {noformat}
> Previous result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/176/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
> testdir/
> testdir/deep/
> {noformat}
> Current result:
> {noformat:title=https://ci.anzix.net/job/ozone-nightly/177/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s16-t3-k2}
> $ ozone sh key list o3://om/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
> testdir/deep/
> {noformat}
> The failure happens on the first operation that tries to use {{testdir/}} directly:
> {noformat}
> $ ozone fs -touch o3fs://bucket1.fstest/testdir/TOUCHFILE.txt
> ls: `o3fs://bucket1.fstest/testdir': No such file or directory
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
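For context, `-p` semantics require materializing a directory key for every ancestor of the target path, which is exactly what the "Previous result" key listing above shows. A minimal sketch of that enumeration in plain Java (not the actual OzoneFileSystem code; `parentKeys` is a hypothetical helper):

```java
import java.util.ArrayList;
import java.util.List;

public class ParentKeys {
    // For "testdir/deep", a -p mkdir must ensure that both "testdir/" and
    // "testdir/deep/" exist as keys, mirroring the pre-regression listing.
    static List<String> parentKeys(String path) {
        List<String> keys = new ArrayList<>();
        StringBuilder prefix = new StringBuilder();
        for (String part : path.split("/")) {
            if (part.isEmpty()) {
                continue; // skip leading or duplicate slashes
            }
            prefix.append(part).append('/');
            keys.add(prefix.toString()); // directory keys carry a trailing slash
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(parentKeys("testdir/deep"));
        // -> [testdir/, testdir/deep/]
    }
}
```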
[jira] [Commented] (HDDS-1834) ozone fs -mkdir -p does not create parent directories
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888798#comment-16888798 ]

Lokesh Jain commented on HDDS-1834:
-----------------------------------

[~adoroszlai] Thanks for reporting the issue! It is working on my local setup.
{code:java}
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -mkdir -p o3fs://bucket1.vol1/testdir/deep
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -touch o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
-rw-rw-rw-   1 hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/testdir/
Found 2 items
-rw-rw-rw-   1 hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
drwxrwxrwx   - hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/deep
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/
Found 1 items
drwxrwxrwx   - hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir
{code}
[jira] [Comment Edited] (HDDS-1834) ozone fs -mkdir -p does not create parent directories
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888798#comment-16888798 ]

Lokesh Jain edited comment on HDDS-1834 at 7/19/19 11:28 AM:
-------------------------------------------------------------

[~adoroszlai] Thanks for reporting the issue! It is working on my local setup.
{code:java}
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -mkdir -p o3fs://bucket1.vol1/testdir/deep
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -touch o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
-rw-rw-rw-   1 hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/testdir/
Found 2 items
-rw-rw-rw-   1 hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/TOUCHFILE.txt
drwxrwxrwx   - hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir/deep
hadoop32 ljain$ docker exec 6bbbee05e7e6 ozone fs -ls o3fs://bucket1.vol1/
Found 1 items
drwxrwxrwx   - hadoop hadoop   0 2019-07-19 11:19 o3fs://bucket1.vol1/testdir
{code}

was (Author: ljain):
[~adoroszlai] Thanks for reporting the issue! On my local setup it is working.
[jira] [Assigned] (HDDS-1834) ozone fs -mkdir -p does not create parent directories in ozonesecure
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain reassigned HDDS-1834:
---------------------------------

    Assignee:     (was: Lokesh Jain)
[jira] [Commented] (HDDS-1816) ContainerStateMachine should limit number of pending apply transactions
[ https://issues.apache.org/jira/browse/HDDS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890287#comment-16890287 ]

Lokesh Jain commented on HDDS-1816:
-----------------------------------

[~nandakumar131] It is good to have but not a blocker for the 0.4.1 release.

> ContainerStateMachine should limit number of pending apply transactions
> ------------------------------------------------------------------------
>
>                 Key: HDDS-1816
>                 URL: https://issues.apache.org/jira/browse/HDDS-1816
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>            Priority: Major
>
> ContainerStateMachine should limit the number of pending apply transactions in order to avoid excessive heap usage by the pending transactions.
[jira] [Updated] (HDDS-1816) ContainerStateMachine should limit number of pending apply transactions
[ https://issues.apache.org/jira/browse/HDDS-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1816:
------------------------------

    Status: Patch Available  (was: Open)

>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
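The bounding described in HDDS-1816 can be sketched with a counting semaphore acquired before an apply transaction is scheduled and released when its future completes. This is a generic Java sketch under stated assumptions: the limit, executor, and class name are illustrative, not the actual ContainerStateMachine fields.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedApply {
    // Illustrative cap on in-flight apply transactions; the real limit
    // would be configurable. Bounding it bounds the heap held by pending
    // transactions.
    private final Semaphore pending = new Semaphore(2);
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    CompletableFuture<Void> applyTransaction(Runnable work) throws InterruptedException {
        pending.acquire(); // blocks once too many applies are outstanding
        return CompletableFuture.runAsync(work, executor)
            .whenComplete((v, t) -> pending.release());
    }

    void shutdown() {
        executor.shutdown();
    }

    public static void main(String[] args) throws Exception {
        BoundedApply sm = new BoundedApply();
        CompletableFuture<?>[] futures = new CompletableFuture<?>[8];
        for (int i = 0; i < 8; i++) {
            futures[i] = sm.applyTransaction(() -> { });
        }
        CompletableFuture.allOf(futures).join();
        sm.shutdown();
        System.out.println("done");
    }
}
```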
[jira] [Commented] (HDDS-1834) ozone fs -mkdir -p does not create parent directories in ozonesecure
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892634#comment-16892634 ]

Lokesh Jain commented on HDDS-1834:
-----------------------------------

HDDS-1481 changed the mkdir logic in OzoneFileSystem. Earlier, all parent directories were created as part of mkdir; that was changed to add a key only for the requested directory. The failure here might be related to the ACLs enabled in the ozonesecure compose file.
[jira] [Commented] (HDDS-1834) parent directories not found in secure setup due to ACL check
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892790#comment-16892790 ]

Lokesh Jain commented on HDDS-1834:
-----------------------------------

The problem exists in general for checkAccess. There are two bugs associated with checkAccess.
# In OzoneFileSystem use cases, accessing a descendant does not trigger a checkAccess on its ancestors. While accessing a/b/c.txt we do not check access for a/ or a/b/; an access check is done only for the path a/b/c.txt.
# In HDDS-1481, mkdir no longer creates ancestor directories if they do not exist. The checkAccess method checks only for the exact key provided and therefore fails with a KEY_NOT_FOUND error. It should instead check for the existence of a directory using getFileStatus.

KeyManagerImpl#checkAccess:1645-1657
{code:java}
OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
if (keyInfo == null) {
  objectKey = OzoneFSUtils.addTrailingSlashIfNeeded(objectKey);
  keyInfo = metadataManager.getKeyTable().get(objectKey);
  if (keyInfo == null) {
    keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
    if (keyInfo == null) {
      throw new OMException("Key not found, checkAccess failed. Key:" +
          objectKey, KEY_NOT_FOUND);
    }
  }
}
{code}
Example illustrating problem 2:
{code:java}
ozone sh key list o3://om/fstest/bucket1/
[ {
  "version" : 0,
  "md5hash" : null,
  "createdOn" : "Thu, 25 Jul 2019 11:26:02 GMT",
  "modifiedOn" : "Thu, 25 Jul 2019 11:26:02 GMT",
  "size" : 0,
  "keyName" : "testdir/deep/",
  "type" : null
}, {
  "version" : 0,
  "md5hash" : null,
  "createdOn" : "Thu, 25 Jul 2019 11:26:09 GMT",
  "modifiedOn" : "Thu, 01 Jan 1970 00:12:54 GMT",
  "size" : 22808,
  "keyName" : "testdir/deep/MOVED.TXT",
  "type" : null
}, {
  "version" : 0,
  "md5hash" : null,
  "createdOn" : "Thu, 25 Jul 2019 11:26:18 GMT",
  "modifiedOn" : "Thu, 01 Jan 1970 00:12:44 GMT",
  "size" : 22808,
  "keyName" : "testdir/deep/PUTFILE.txt",
  "type" : null
} ]
ozone sh key info o3://om/fstest/bucket1/testdir
KEY_NOT_FOUND Key not found, checkAccess failed. Key:/fstest/bucket1/testdir/
{code}
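The ancestor check missing in bug 1 amounts to walking every prefix of the key before checking the leaf itself. A plain-Java sketch of that prefix walk (illustrative only; the real fix would run the OM ACL check against each returned key):

```java
import java.util.ArrayList;
import java.util.List;

public class AncestorAccess {
    // Returns the ancestor directory keys that should each pass an ACL
    // check before the leaf key is checked: for "a/b/c.txt" that is
    // ["a/", "a/b/"].
    static List<String> ancestors(String key) {
        List<String> result = new ArrayList<>();
        int idx = key.indexOf('/');
        while (idx >= 0) {
            result.add(key.substring(0, idx + 1)); // prefix up to and including '/'
            idx = key.indexOf('/', idx + 1);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(ancestors("a/b/c.txt"));
        // -> [a/, a/b/]
    }
}
```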
[jira] [Comment Edited] (HDDS-1834) parent directories not found in secure setup due to ACL check
[ https://issues.apache.org/jira/browse/HDDS-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892790#comment-16892790 ]

Lokesh Jain edited comment on HDDS-1834 at 7/25/19 2:14 PM:
------------------------------------------------------------

There are two bugs associated with checkAccess.
# In OzoneFileSystem use cases, accessing a descendant does not trigger a checkAccess on its ancestors. While accessing a/b/c.txt we do not check access for a/ or a/b/; an access check is done only for the path a/b/c.txt.
# In HDDS-1481, mkdir no longer creates ancestor directories if they do not exist. The checkAccess method checks only for the exact key provided and therefore fails with a KEY_NOT_FOUND error. It should instead check for the existence of a directory using getFileStatus.

KeyManagerImpl#checkAccess:1645-1657
{code:java}
OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
if (keyInfo == null) {
  objectKey = OzoneFSUtils.addTrailingSlashIfNeeded(objectKey);
  keyInfo = metadataManager.getKeyTable().get(objectKey);
  if (keyInfo == null) {
    keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
    if (keyInfo == null) {
      throw new OMException("Key not found, checkAccess failed. Key:" +
          objectKey, KEY_NOT_FOUND);
    }
  }
}
{code}
Example illustrating problem 2:
{code:java}
ozone sh key info o3://om/fstest/bucket1/testdir
KEY_NOT_FOUND Key not found, checkAccess failed. Key:/fstest/bucket1/testdir/
{code}

was (Author: ljain):
The problem exists in general for checkAccess. There are two bugs associated with checkAccess.
[jira] [Created] (HDFS-14692) Upload button should not encode complete url
Lokesh Jain created HDFS-14692:
----------------------------------

             Summary: Upload button should not encode complete url
                 Key: HDFS-14692
                 URL: https://issues.apache.org/jira/browse/HDFS-14692
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Lokesh Jain
            Assignee: Lokesh Jain

explorer.js#modal-upload-file-button currently does not work with Knox. The function encodes the complete URL and thus creates a malformed URL, which leads to an error while uploading the file.

Example of a malformed URL:
"https%3A//127.0.0.1%3A/gateway/default/webhdfs/v1/app-logs/BUILDING.txt?op=CREATE&noredirect=true"
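The underlying mistake is percent-encoding the whole URL instead of only the path component. A Java illustration of the difference (the host and path values here are made up for the example; the actual fix lives in explorer.js, not Java):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UploadUrl {
    // Encoding the complete URL mangles the scheme separator, producing
    // strings like the "https%3A//..." example from the report.
    static String encodeWhole(String url) {
        return URLEncoder.encode(url, StandardCharsets.UTF_8);
    }

    // Assemble per component instead: the scheme and authority survive,
    // and only reserved characters inside the path get escaped.
    static String encodeComponents(String host, String path, String query) throws Exception {
        return new URI("https", host, path, query, null).toASCIIString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encodeWhole("https://127.0.0.1/webhdfs/v1/a b.txt"));
        // -> https%3A%2F%2F127.0.0.1%2Fwebhdfs%2Fv1%2Fa+b.txt (malformed as a URL)
        System.out.println(encodeComponents("127.0.0.1", "/webhdfs/v1/a b.txt", "op=CREATE"));
        // -> https://127.0.0.1/webhdfs/v1/a%20b.txt?op=CREATE
    }
}
```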
[jira] [Updated] (HDFS-14692) Upload button should not encode complete url
[ https://issues.apache.org/jira/browse/HDFS-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDFS-14692:
-------------------------------

    Attachment: HDFS-14692.001.patch
[jira] [Updated] (HDFS-14692) Upload button should not encode complete url
[ https://issues.apache.org/jira/browse/HDFS-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDFS-14692:
-------------------------------

    Status: Patch Available  (was: Open)
[jira] [Commented] (HDFS-14692) Upload button should not encode complete url
[ https://issues.apache.org/jira/browse/HDFS-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898275#comment-16898275 ]

Lokesh Jain commented on HDFS-14692:
------------------------------------

The patch fixes the issue by encoding just the directory part of the URL. The upload file button still fails with a Mixed Content error after the fix; that error will require a separate change.
{code:java}
jquery-3.3.1.min.js:2 Mixed Content: The page at 'https://127.0.0.1:/gateway/default/hdfs/explorer.html#/app-logs' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://nn-host:50075/webhdfs/v1/app-logs/BUILDING.txt?op=CREATE&doas=drwho&namenoderpcaddress=nn-host:8020&createflag=&createparent=true&overwrite=false'. This request has been blocked; the content must be served over HTTPS.
{code}
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1208:
------------------------------

    Status: Patch Available  (was: Open)

> ContainerStateMachine should set chunk data as state machine data for ratis
> ----------------------------------------------------------------------------
>
>                 Key: HDDS-1208
>                 URL: https://issues.apache.org/jira/browse/HDDS-1208
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>            Priority: Major
>         Attachments: HDDS-1208.001.patch
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as the state machine data. This requires converting the ContainerCommandRequestProto to a ByteString, which leads to a redundant buffer copy in the case of a write chunk request. This can be avoided by setting the chunk data as the state machine data for a log entry in Ratis.
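The redundant copy the issue describes can be illustrated in plain Java: embedding the chunk in a serialized envelope forces its bytes to be copied into a fresh buffer, while keeping the chunk as a separate payload lets the same buffer be shared by reference. This is a simplified analogy of the idea, not the Ratis or protobuf API.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ChunkCopy {
    // Embedding the chunk in a serialized message costs an extra
    // allocation plus a full copy of the chunk bytes.
    static ByteBuffer embedWithCopy(byte[] chunk) {
        byte[] envelope = Arrays.copyOf(chunk, chunk.length); // redundant copy
        return ByteBuffer.wrap(envelope);
    }

    // Keeping the chunk as separate state machine data lets the original
    // buffer be handed over without copying.
    static ByteBuffer shareWithoutCopy(byte[] chunk) {
        return ByteBuffer.wrap(chunk); // zero-copy view over the same array
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[] {1, 2, 3};
        System.out.println(embedWithCopy(chunk).array() == chunk);    // false: copied
        System.out.println(shareWithoutCopy(chunk).array() == chunk); // true: shared
    }
}
```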
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1208:
------------------------------

    Attachment: HDDS-1208.001.patch
[jira] [Commented] (HDDS-1199) In Healthy Pipeline rule consider pipelines with all replicationType and replicationFactor
[ https://issues.apache.org/jira/browse/HDDS-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782970#comment-16782970 ]

Lokesh Jain commented on HDDS-1199:
-----------------------------------

[~bharatviswa] Standalone pipelines are not reported by the datanode. But if the nodes involved in a standalone pipeline are reported, then the standalone pipeline can be considered healthy.

> In Healthy Pipeline rule consider pipelines with all replicationType and replicationFactor
> --------------------------------------------------------------------------------------------
>
>                 Key: HDDS-1199
>                 URL: https://issues.apache.org/jira/browse/HDDS-1199
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>
> In the current HealthyPipelineRule, we considered only pipelines of type RATIS with replication factor 3 for the 10% threshold.
>
> This Jira is to consider pipelines with all replication factors for the 10% threshold (i.e., each pipeline type with its factor should meet 10%).
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1208:
------------------------------

    Attachment: HDDS-1208.002.patch
[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis
[ https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783189#comment-16783189 ] Lokesh Jain commented on HDDS-1171: --- [~anu] Can you please review the patch? The picocli was added to Genesis in HDDS-1104. > Add benchmark for OM and OM client in Genesis > - > > Key: HDDS-1171 > URL: https://issues.apache.org/jira/browse/HDDS-1171 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch > > > This Jira aims to add benchmark for OM and OM client in Genesis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling
[ https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1128: -- Attachment: HDDS-1128.002.patch > Create stateful manager class for the pipeline creation scheduling > -- > > Key: HDDS-1128 > URL: https://issues.apache.org/jira/browse/HDDS-1128 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Lokesh Jain >Priority: Critical > Attachments: HDDS-1128.001.patch, HDDS-1128.002.patch > > > HDDS-1076 introduced a new static variable in RatisPipelineProvider: > Scheduler. It seems to be a global variable which makes the testing harder. > [~shashikant] also suggested to remove it: > {quote}It would be a good idea to move the scheduler Class Utility into some > common utility package so that it can be used in multiple places as and when > needed. > {quote} > I agree. And findbug also complains about it: > {quote}H D ST: Write to static field > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from > instance method new > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, > PipelineStateManager, Configuration) At RatisPipelineProvider.java:[line 56] > {quote} > I think we need a new class which includes both the state of > RatisPipelineUtils.isPipelineCreatorRunning and > RaitsPipelineProvider.Scheduler. It should have one instance which is > available for the classes which requires it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling
[ https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783221#comment-16783221 ] Lokesh Jain commented on HDDS-1128: --- v2 patch makes some changes based on comments by [~nandakumar131] in offline discussion. There are some renaming changes and changes in the PipelineManager interface. The api for finalizePipeline and removePipeline has been replaced by finalizeAndDestroyPipeline in the interface. > Create stateful manager class for the pipeline creation scheduling > -- > > Key: HDDS-1128 > URL: https://issues.apache.org/jira/browse/HDDS-1128 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Lokesh Jain >Priority: Critical > Attachments: HDDS-1128.001.patch, HDDS-1128.002.patch > > > HDDS-1076 introduced a new static variable in RatisPipelineProvider: > Scheduler. It seems to be a global variable which makes the testing harder. > [~shashikant] also suggested to remove it: > {quote}It would be a good idea to move the scheduler Class Utility into some > common utility package so that it can be used in multiple places as and when > needed. > {quote} > I agree. And findbug also complains about it: > {quote}H D ST: Write to static field > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from > instance method new > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, > PipelineStateManager, Configuration) At RatisPipelineProvider.java:[line 56] > {quote} > I think we need a new class which includes both the state of > RatisPipelineUtils.isPipelineCreatorRunning and > RaitsPipelineProvider.Scheduler. It should have one instance which is > available for the classes which requires it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1171) Add benchmark for OM and OM client in Genesis
[ https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1171: -- Attachment: HDDS-1171.003.patch > Add benchmark for OM and OM client in Genesis > - > > Key: HDDS-1171 > URL: https://issues.apache.org/jira/browse/HDDS-1171 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, > HDDS-1171.003.patch > > > This Jira aims to add benchmark for OM and OM client in Genesis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis
[ https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784024#comment-16784024 ] Lokesh Jain commented on HDDS-1171: --- [~anu] Thanks for reviewing the patch! I have uploaded rebased v3 patch. > Add benchmark for OM and OM client in Genesis > - > > Key: HDDS-1171 > URL: https://issues.apache.org/jira/browse/HDDS-1171 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, > HDDS-1171.003.patch > > > This Jira aims to add benchmark for OM and OM client in Genesis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
Lokesh Jain created HDDS-1220: - Summary: KeyManager#openKey should release the bucket lock before doing an allocateBlock Key: HDDS-1220 URL: https://issues.apache.org/jira/browse/HDDS-1220 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently KeyManager#openKey makes an allocateBlock call without releasing the bucket lock. Since allocateBlock requires an RPC connection to SCM, the handler thread in OM would hold the bucket lock until the RPC is complete. Since the allocateBlock call does not require the bucket lock to be held, it can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
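The lock-scope restructuring proposed in HDDS-1220 can be sketched as follows. `OpenKeySketch`, `validateKey`, and `allocateBlockFromScm` are hypothetical stand-ins for the OM internals, not the actual Ozone API; the point is only that the slow remote call moves outside the critical section.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: hold the bucket lock only for local metadata work,
// and make the (slow) SCM call after releasing it.
public class OpenKeySketch {
    private final ReentrantLock bucketLock = new ReentrantLock();

    // Before: the lock is held across the remote allocateBlock call.
    long openKeyHoldingLock(String key) {
        bucketLock.lock();
        try {
            validateKey(key);
            return allocateBlockFromScm(key); // RPC made while lock is held
        } finally {
            bucketLock.unlock();
        }
    }

    // After: release the bucket lock first, then do the RPC.
    long openKeyReleasedLock(String key) {
        bucketLock.lock();
        try {
            validateKey(key);
        } finally {
            bucketLock.unlock();
        }
        // Other operations on this bucket can proceed during the RPC.
        return allocateBlockFromScm(key);
    }

    private void validateKey(String key) {
        // Stand-in for local metadata checks done under the lock.
    }

    private long allocateBlockFromScm(String key) {
        // Stand-in for the SCM RPC; returns a fake block id.
        return key.hashCode();
    }

    public static void main(String[] args) {
        OpenKeySketch om = new OpenKeySketch();
        System.out.println(om.openKeyReleasedLock("vol/bucket/key1"));
    }
}
```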
[jira] [Created] (HDDS-1221) Introduce fine grained lock in Ozone Manager for key operations
Lokesh Jain created HDDS-1221: - Summary: Introduce fine grained lock in Ozone Manager for key operations Key: HDDS-1221 URL: https://issues.apache.org/jira/browse/HDDS-1221 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently the Ozone Manager acquires the bucket lock for key operations in OM. We can introduce fine-grained locks for key operations in the Ozone Manager. This would help increase throughput for key operations within a bucket. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
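One common way to realize the fine-grained locking proposed in HDDS-1221 is lock striping: a fixed pool of locks indexed by the key's hash, so operations on different keys in the same bucket rarely contend. This is a minimal sketch under that assumption, not the locking scheme actually adopted; `KeyLockStripes` and its methods are hypothetical names.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical striped key-lock sketch: a fixed array of ReentrantLocks,
// with each key mapped to one stripe by its hash.
public class KeyLockStripes {
    private final ReentrantLock[] stripes;

    public KeyLockStripes(int stripeCount) {
        stripes = new ReentrantLock[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    private ReentrantLock stripeFor(String key) {
        // Mask off the sign bit before taking the modulus.
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }

    public void lockKey(String key)       { stripeFor(key).lock(); }
    public boolean tryLockKey(String key) { return stripeFor(key).tryLock(); }
    public void unlockKey(String key)     { stripeFor(key).unlock(); }

    public static void main(String[] args) {
        KeyLockStripes locks = new KeyLockStripes(16);
        locks.lockKey("bucket1/keyA");
        // Even if keyB maps to the same stripe, ReentrantLock is reentrant
        // for the owning thread, so this never deadlocks here.
        locks.lockKey("bucket1/keyB");
        locks.unlockKey("bucket1/keyB");
        locks.unlockKey("bucket1/keyA");
        System.out.println("done");
    }
}
```

The tradeoff is that two distinct keys can share a stripe (false contention), which shrinks as the stripe count grows; a per-bucket lock is the degenerate case of a single stripe.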
[jira] [Comment Edited] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization
[ https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784764#comment-16784764 ] Lokesh Jain edited comment on HDDS-1210 at 3/5/19 6:41 PM: --- [~msingh] Thanks for working on this! The patch looks good to me. Sorry for the late review. I have a few minor comments. # MockRatisPipelineProvider can extend RatisPipelineProvider. Then we would only need to override the initializePipeline function. # We can also place MockRatisPipelineProvider inside org.apache.hadoop.hdds.scm.pipeline. Then we would not have to change the access modifier for the PipelineStateManager and RatisPipelineProvider classes. # Also, should we pass the SCMPipelineManager instance as the constructor argument for the pipeline factory? This would make sure that PipelineStateManager and other classes are not exposed outside the pipeline package. # There are a few checkstyle issues. was (Author: ljain): [~msingh] Thanks for working on this! The patch looks good to me. Sorry for the late review. I have a few minor comments. # MockRatisPipelineProvider can extend RatisPipelineProvider. Then we would only need to override the initializePipeline function. # We can also place MockRatisPipelineProvider inside org.apache.hadoop.hdds.scm.pipeline. Then we would not have to change the access modifier for the PipelineStateManager and RatisPipelineProvider classes. # Also, should we pass the SCMPipelineManager instance as the constructor argument for the pipeline factory? This would make sure that PipelineStateManager and other classes are not exposed outside the pipeline package. 
> Ratis pipeline creation doesn't check raft client reply status during > initialization > - > > Key: HDDS-1210 > URL: https://issues.apache.org/jira/browse/HDDS-1210 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, > HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, > HDDS-1210.006.patch > > > Ratis pipeline are initialized using `raftClient.groupAdd`. However the > pipeline initialization can fail and this can only be determined by > raftClientReply status. > {code} > callRatisRpc(pipeline.getNodes(), ozoneConf, > (raftClient, peer) -> raftClient.groupAdd(group, peer.getId())); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization
[ https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784764#comment-16784764 ] Lokesh Jain commented on HDDS-1210: --- [~msingh] Thanks for working on this! The patch looks good to me. Sorry for the late review. I have a few minor comments. # MockRatisPipelineProvider can extend RatisPipelineProvider. Then we would only need to override the initializePipeline function. # We can also place MockRatisPipelineProvider inside org.apache.hadoop.hdds.scm.pipeline. Then we would not have to change the access modifier for the PipelineStateManager and RatisPipelineProvider classes. # Also, should we pass the SCMPipelineManager instance as the constructor argument for the pipeline factory? This would make sure that PipelineStateManager and other classes are not exposed outside the pipeline package. > Ratis pipeline creation doesn't check raft client reply status during > initialization > - > > Key: HDDS-1210 > URL: https://issues.apache.org/jira/browse/HDDS-1210 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, > HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, > HDDS-1210.006.patch > > > Ratis pipeline are initialized using `raftClient.groupAdd`. However the > pipeline initialization can fail and this can only be determined by > raftClientReply status. > {code} > callRatisRpc(pipeline.getNodes(), ozoneConf, > (raftClient, peer) -> raftClient.groupAdd(group, peer.getId())); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis
[ https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785235#comment-16785235 ] Lokesh Jain commented on HDDS-1171: --- [~anu] Thanks for committing the patch! Sorry for missing the checkstyle issues. > Add benchmark for OM and OM client in Genesis > - > > Key: HDDS-1171 > URL: https://issues.apache.org/jira/browse/HDDS-1171 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.4.0, 0.5.0 > > Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, > HDDS-1171.003.patch > > > This Jira aims to add benchmark for OM and OM client in Genesis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1208: -- Attachment: HDDS-1208.003.patch > ContainerStateMachine should set chunk data as state machine data for ratis > --- > > Key: HDDS-1208 > URL: https://issues.apache.org/jira/browse/HDDS-1208 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, > HDDS-1208.003.patch > > > Currently ContainerStateMachine sets ContainerCommandRequestProto as state > machine data. This requires converting the ContainerCommandRequestProto to a > bytestring which leads to redundant buffer copy in case of write chunk > request. This can be avoided by setting the chunk data as the state machine > data for a log entry in ratis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785425#comment-16785425 ] Lokesh Jain commented on HDDS-1208: --- Uploaded rebased v3 patch. > ContainerStateMachine should set chunk data as state machine data for ratis > --- > > Key: HDDS-1208 > URL: https://issues.apache.org/jira/browse/HDDS-1208 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, > HDDS-1208.003.patch > > > Currently ContainerStateMachine sets ContainerCommandRequestProto as state > machine data. This requires converting the ContainerCommandRequestProto to a > bytestring which leads to redundant buffer copy in case of write chunk > request. This can be avoided by setting the chunk data as the state machine > data for a log entry in ratis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1210) Ratis pipeline creation doesn't check raft client reply status during initialization
[ https://issues.apache.org/jira/browse/HDDS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785445#comment-16785445 ] Lokesh Jain commented on HDDS-1210: --- [~msingh] Thanks for updating the patch! The patch looks good to me. +1. > Ratis pipeline creation doesn't check raft client reply status during > initialization > - > > Key: HDDS-1210 > URL: https://issues.apache.org/jira/browse/HDDS-1210 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1210.001.patch, HDDS-1210.002.patch, > HDDS-1210.003.patch, HDDS-1210.004.patch, HDDS-1210.005.patch, > HDDS-1210.006.patch, HDDS-1210.007.patch, HDDS-1210.008.patch > > > Ratis pipeline are initialized using `raftClient.groupAdd`. However the > pipeline initialization can fail and this can only be determined by > raftClientReply status. > {code} > callRatisRpc(pipeline.getNodes(), ozoneConf, > (raftClient, peer) -> raftClient.groupAdd(group, peer.getId())); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785820#comment-16785820 ] Lokesh Jain commented on HDDS-1208: --- [~msingh] Thanks for reviewing the patch! I have committed the patch to trunk. > ContainerStateMachine should set chunk data as state machine data for ratis > --- > > Key: HDDS-1208 > URL: https://issues.apache.org/jira/browse/HDDS-1208 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, > HDDS-1208.003.patch > > > Currently ContainerStateMachine sets ContainerCommandRequestProto as state > machine data. This requires converting the ContainerCommandRequestProto to a > bytestring which leads to redundant buffer copy in case of write chunk > request. This can be avoided by setting the chunk data as the state machine > data for a log entry in ratis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1208: -- Resolution: Resolved Status: Resolved (was: Patch Available) > ContainerStateMachine should set chunk data as state machine data for ratis > --- > > Key: HDDS-1208 > URL: https://issues.apache.org/jira/browse/HDDS-1208 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, > HDDS-1208.003.patch > > > Currently ContainerStateMachine sets ContainerCommandRequestProto as state > machine data. This requires converting the ContainerCommandRequestProto to a > bytestring which leads to redundant buffer copy in case of write chunk > request. This can be avoided by setting the chunk data as the state machine > data for a log entry in ratis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis
[ https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1208: -- Fix Version/s: (was: 0.5.0) 0.4.0 > ContainerStateMachine should set chunk data as state machine data for ratis > --- > > Key: HDDS-1208 > URL: https://issues.apache.org/jira/browse/HDDS-1208 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1208.001.patch, HDDS-1208.002.patch, > HDDS-1208.003.patch > > > Currently ContainerStateMachine sets ContainerCommandRequestProto as state > machine data. This requires converting the ContainerCommandRequestProto to a > bytestring which leads to redundant buffer copy in case of write chunk > request. This can be avoided by setting the chunk data as the state machine > data for a log entry in ratis. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
[ https://issues.apache.org/jira/browse/HDDS-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1220: -- Status: Patch Available (was: Open) > KeyManager#openKey should release the bucket lock before doing an > allocateBlock > --- > > Key: HDDS-1220 > URL: https://issues.apache.org/jira/browse/HDDS-1220 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1220.001.patch > > > Currently KeyManager#openKey makes an allocateBlock call without releasing > the bucket lock. Since allocateBlock requires a rpc connection to SCM, the > handler thread in OM would hold the bucket lock until rpc is complete. Since > allocateBlock call does not require a bucket lock to be held, allocateBlock > call can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
[ https://issues.apache.org/jira/browse/HDDS-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1220: -- Attachment: HDDS-1220.001.patch > KeyManager#openKey should release the bucket lock before doing an > allocateBlock > --- > > Key: HDDS-1220 > URL: https://issues.apache.org/jira/browse/HDDS-1220 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1220.001.patch > > > Currently KeyManager#openKey makes an allocateBlock call without releasing > the bucket lock. Since allocateBlock requires a rpc connection to SCM, the > handler thread in OM would hold the bucket lock until rpc is complete. Since > allocateBlock call does not require a bucket lock to be held, allocateBlock > call can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
[ https://issues.apache.org/jira/browse/HDDS-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1220: -- Resolution: Resolved Status: Resolved (was: Patch Available) > KeyManager#openKey should release the bucket lock before doing an > allocateBlock > --- > > Key: HDDS-1220 > URL: https://issues.apache.org/jira/browse/HDDS-1220 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1220.001.patch > > > Currently KeyManager#openKey makes an allocateBlock call without releasing > the bucket lock. Since allocateBlock requires a rpc connection to SCM, the > handler thread in OM would hold the bucket lock until rpc is complete. Since > allocateBlock call does not require a bucket lock to be held, allocateBlock > call can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
[ https://issues.apache.org/jira/browse/HDDS-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1220: -- Fix Version/s: 0.5.0 0.4.0 > KeyManager#openKey should release the bucket lock before doing an > allocateBlock > --- > > Key: HDDS-1220 > URL: https://issues.apache.org/jira/browse/HDDS-1220 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.4.0, 0.5.0 > > Attachments: HDDS-1220.001.patch > > > Currently KeyManager#openKey makes an allocateBlock call without releasing > the bucket lock. Since allocateBlock requires a rpc connection to SCM, the > handler thread in OM would hold the bucket lock until rpc is complete. Since > allocateBlock call does not require a bucket lock to be held, allocateBlock > call can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1220) KeyManager#openKey should release the bucket lock before doing an allocateBlock
[ https://issues.apache.org/jira/browse/HDDS-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789342#comment-16789342 ] Lokesh Jain commented on HDDS-1220: --- [~msingh] Thanks for reviewing the patch. I have committed it to trunk and ozone-0.4. > KeyManager#openKey should release the bucket lock before doing an > allocateBlock > --- > > Key: HDDS-1220 > URL: https://issues.apache.org/jira/browse/HDDS-1220 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.4.0, 0.5.0 > > Attachments: HDDS-1220.001.patch > > > Currently KeyManager#openKey makes an allocateBlock call without releasing > the bucket lock. Since allocateBlock requires a rpc connection to SCM, the > handler thread in OM would hold the bucket lock until rpc is complete. Since > allocateBlock call does not require a bucket lock to be held, allocateBlock > call can be made after releasing the bucket lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
Lokesh Jain created HDDS-1248: - Summary: TestSecureOzoneRpcClient fails intermittently Key: HDDS-1248 URL: https://issues.apache.org/jira/browse/HDDS-1248 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Lokesh Jain Fix For: 0.4.0 TestSecureOzoneRpcClient fails intermittently with the following exception. {code:java} java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. 
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:543) at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:333) ... 35 more Caused by: java.util.concurrent.CompletionException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null. at
[jira] [Resolved] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
[ https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain resolved HDDS-1248. --- Resolution: Duplicate > TestSecureOzoneRpcClient fails intermittently > - > > Key: HDDS-1248 > URL: https://issues.apache.org/jira/browse/HDDS-1248 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > > TestSecureOzoneRpcClient fails intermittently with the following exception. > {code:java} > java.io.IOException: Unexpected Storage Container Exception: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) > at > org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Caused by: java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:543) > at > org.apache.hadoop.hdds.scm.sto
[jira] [Commented] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
[ https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789389#comment-16789389 ] Lokesh Jain commented on HDDS-1248: --- This test will be fixed as part of HDDS-1095. > TestSecureOzoneRpcClient fails intermittently > - > > Key: HDDS-1248 > URL: https://issues.apache.org/jira/browse/HDDS-1248 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > > TestSecureOzoneRpcClient fails intermittently with the following exception. > {code:java} > java.io.IOException: Unexpected Storage Container Exception: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) > at > org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Caused by: java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutu
[jira] [Reopened] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently
[ https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reopened HDDS-1248: --- The test calls BlockTokenIdentifier#setTestStub(true) in TestSecureOzoneRpcClient#testKeyOpFailureWithoutBlockToken. Since testStub is true all the concurrently running tests fail with Block token verification failed exception. > TestSecureOzoneRpcClient fails intermittently > - > > Key: HDDS-1248 > URL: https://issues.apache.org/jira/browse/HDDS-1248 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Lokesh Jain >Priority: Major > Fix For: 0.4.0 > > > > TestSecureOzoneRpcClient fails intermittently with the following exception. > {code:java} > java.io.IOException: Unexpected Storage Container Exception: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. 
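The failure mode described in the reopen comment — one test flipping a JVM-wide static stub while sibling tests run concurrently in the same JVM — can be reduced to a small sketch. All names below are illustrative stand-ins for the pattern, not the actual Ozone {{BlockTokenIdentifier}} API:

```java
import java.util.concurrent.*;

public class StaticStubRace {
    // Illustrative stand-in for the static test stub described above
    // (BlockTokenIdentifier#setTestStub); not the actual Ozone API.
    static volatile boolean testStub = false;

    static void verifyToken(String token) {
        // With the stub enabled, verification is forced down the failure
        // path even for callers that supplied a valid token.
        if (testStub || token == null || token.isEmpty()) {
            throw new IllegalStateException(
                "Block token verification failed. Fail to find any token");
        }
    }

    // Returns true if enabling the stub in "test A" poisons a concurrently
    // running "test B" that holds a perfectly valid token.
    static boolean stubLeaks() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            pool.submit(() -> testStub = true).get();   // test A flips the global
            Future<Boolean> testB = pool.submit(() -> {
                try {
                    verifyToken("valid-token");
                    return false;                       // would pass in isolation
                } catch (IllegalStateException e) {
                    return true;                        // fails because of test A
                }
            });
            return testB.get();
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("stub leaked across tests: " + stubLeaks());
    }
}
```

Isolating the stub-dependent test in its own JVM fork, or replacing the static flag with per-instance state, would remove this cross-test leak.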
> at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338) > at > org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238) > at > org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310) > at > org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271) > at > org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131) > at > org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Caused by: java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: > Block token verification failed. Fail to find any token (empty or null. > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.Co
[jira] [Commented] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling
[ https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790336#comment-16790336 ] Lokesh Jain commented on HDDS-1128: --- v3 patch fixes the test failure in TestRatisPipelineUtils. > Create stateful manager class for the pipeline creation scheduling > -- > > Key: HDDS-1128 > URL: https://issues.apache.org/jira/browse/HDDS-1128 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1128.001.patch, HDDS-1128.002.patch, > HDDS-1128.003.patch > > > HDDS-1076 introduced a new static variable in RatisPipelineProvider: > Scheduler. It seems to be a global variable which makes the testing harder. > [~shashikant] also suggested to remove it: > {quote}It would be a good idea to move the scheduler Class Utility into some > common utility package so that it can be used in multiple places as and when > needed. > {quote} > I agree. And findbug also complains about it: > {quote}H D ST: Write to static field > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from > instance method new > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, > PipelineStateManager, Configuration) At RatisPipelineProvider.java:[line 56] > {quote} > I think we need a new class which includes both the state of > RatisPipelineUtils.isPipelineCreatorRunning and > RaitsPipelineProvider.Scheduler. It should have one instance which is > available for the classes which requires it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
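As a rough illustration of the refactor proposed in the issue description — moving the static {{RatisPipelineProvider.scheduler}} and the static {{RatisPipelineUtils.isPipelineCreatorRunning}} flag into one stateful, injectable object — a sketch with hypothetical names (not the actual Ozone classes) might look like:

```java
import java.util.concurrent.*;

// Hypothetical sketch only: the class and method names are illustrative.
// The scheduler becomes an instance field injected via the constructor,
// so there is no write to a static field from an instance method (the
// findbugs ST warning) and tests can supply their own executor.
class BackgroundPipelineCreator {
    private final ScheduledExecutorService scheduler;
    private volatile boolean creatorRunning;   // was a static flag

    BackgroundPipelineCreator(ScheduledExecutorService scheduler) {
        this.scheduler = scheduler;            // injected, no global state
    }

    void scheduleCreation(Runnable createPipeline, long delayMs) {
        creatorRunning = true;
        scheduler.schedule(() -> {
            try {
                createPipeline.run();
            } finally {
                creatorRunning = false;
            }
        }, delayMs, TimeUnit.MILLISECONDS);
    }

    boolean isCreatorRunning() {
        return creatorRunning;
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ex = Executors.newSingleThreadScheduledExecutor();
        BackgroundPipelineCreator creator = new BackgroundPipelineCreator(ex);
        CountDownLatch ran = new CountDownLatch(1);
        creator.scheduleCreation(ran::countDown, 10);
        if (!ran.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("pipeline creation task did not run");
        }
        ex.shutdown();
        System.out.println("scheduler held as instance state");
    }
}
```

Holding one such instance where it is needed, instead of a shared static, is what makes the scheduling state testable in isolation.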
[jira] [Updated] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling
[ https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1128: -- Attachment: HDDS-1128.003.patch > Create stateful manager class for the pipeline creation scheduling > -- > > Key: HDDS-1128 > URL: https://issues.apache.org/jira/browse/HDDS-1128 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1128.001.patch, HDDS-1128.002.patch, > HDDS-1128.003.patch > > > HDDS-1076 introduced a new static variable in RatisPipelineProvider: > Scheduler. It seems to be a global variable which makes the testing harder. > [~shashikant] also suggested to remove it: > {quote}It would be a good idea to move the scheduler Class Utility into some > common utility package so that it can be used in multiple places as and when > needed. > {quote} > I agree. And findbug also complains about it: > {quote}H D ST: Write to static field > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from > instance method new > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, > PipelineStateManager, Configuration) At RatisPipelineProvider.java:[line 56] > {quote} > I think we need a new class which includes both the state of > RatisPipelineUtils.isPipelineCreatorRunning and > RaitsPipelineProvider.Scheduler. It should have one instance which is > available for the classes which requires it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1128) Create stateful manager class for the pipeline creation scheduling
[ https://issues.apache.org/jira/browse/HDDS-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790336#comment-16790336 ] Lokesh Jain edited comment on HDDS-1128 at 3/12/19 8:31 AM: [~nandakumar131] Thanks for reviewing the patch! v3 patch fixes the test failure in TestRatisPipelineUtils. was (Author: ljain): v3 patch fixes the test failure in TestRatisPipelineUtils. > Create stateful manager class for the pipeline creation scheduling > -- > > Key: HDDS-1128 > URL: https://issues.apache.org/jira/browse/HDDS-1128 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1128.001.patch, HDDS-1128.002.patch, > HDDS-1128.003.patch > > > HDDS-1076 introduced a new static variable in RatisPipelineProvider: > Scheduler. It seems to be a global variable which makes the testing harder. > [~shashikant] also suggested to remove it: > {quote}It would be a good idea to move the scheduler Class Utility into some > common utility package so that it can be used in multiple places as and when > needed. > {quote} > I agree. And findbug also complains about it: > {quote}H D ST: Write to static field > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.scheduler from > instance method new > org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider(NodeManager, > PipelineStateManager, Configuration) At RatisPipelineProvider.java:[line 56] > {quote} > I think we need a new class which includes both the state of > RatisPipelineUtils.isPipelineCreatorRunning and > RaitsPipelineProvider.Scheduler. It should have one instance which is > available for the classes which requires it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1095) OzoneManager#openKey should do multiple block allocations in a single SCM rpc call
[ https://issues.apache.org/jira/browse/HDDS-1095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790394#comment-16790394 ] Lokesh Jain commented on HDDS-1095: --- [~msingh] Thanks for updating the patch! The patch looks good to me. I have a few minor comments. # In ScmBlockLocationProtocolServerSideTranslatorPB, we should return the blocks in the response proto even if they are less than the requested number of blocks. Also can we add another error code for this situation? # ScmBlockLocationProtocolClientSideTranslatorPB:114 - We can initialise the array list with the received number of blocks. # SCMBlockProtocolServer:167 - Same as above. > OzoneManager#openKey should do multiple block allocations in a single SCM rpc > call > -- > > Key: HDDS-1095 > URL: https://issues.apache.org/jira/browse/HDDS-1095 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1095.001.patch, HDDS-1095.002.patch, > HDDS-1095.003.patch, HDDS-1095.004.patch, HDDS-1095.005.patch, > HDDS-1095.006.patch > > > Currently in KeyManagerImpl#openKey, for a large key allocation, multiple > blocks are allocated in different rpc calls. If the key length is already > known, then multiple blocks can be allocated in one rpc call to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1095) OzoneManager#openKey should do multiple block allocations in a single SCM rpc call
[ https://issues.apache.org/jira/browse/HDDS-1095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790414#comment-16790414 ] Lokesh Jain commented on HDDS-1095: --- [~msingh] Thanks for updating the patch! It looks good to me. +1. > OzoneManager#openKey should do multiple block allocations in a single SCM rpc > call > -- > > Key: HDDS-1095 > URL: https://issues.apache.org/jira/browse/HDDS-1095 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.4.0 > > Attachments: HDDS-1095.001.patch, HDDS-1095.002.patch, > HDDS-1095.003.patch, HDDS-1095.004.patch, HDDS-1095.005.patch, > HDDS-1095.006.patch, HDDS-1095.007.patch > > > Currently in KeyManagerImpl#openKey, for a large key allocation, multiple > blocks are allocated in different rpc calls. If the key length is already > known, then multiple blocks can be allocated in one rpc call to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.
[ https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792860#comment-16792860 ] Lokesh Jain commented on HDDS-1185: --- [~msingh] Thanks for working on this! The patch looks good to me. Please find my comments below. # KeyManagerImpl:1220 - redundant call to getBucketInfo. # KeyManagerImpl:1242-46 - We can avoid the call to listKeys? # OzoneManager:2570 - We need to use incNumGetFileStatusFails() function # There are a few checkstyle issues. > Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call > to OM. > --- > > Key: HDDS-1185 > URL: https://issues.apache.org/jira/browse/HDDS-1185 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Filesystem >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.5.0 > > Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, > HDDS-1185.003.patch > > > GetFileStatus sends multiple rpc calls to Ozone Manager to fetch the file > status for a given file. This can be optimized by performing all the > processing on the OzoneManager for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1300) Optimize non-recursive ozone filesystem apis
Lokesh Jain created HDDS-1300: - Summary: Optimize non-recursive ozone filesystem apis Key: HDDS-1300 URL: https://issues.apache.org/jira/browse/HDDS-1300 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: Ozone Filesystem, Ozone Manager Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to optimise non recursive apis in ozone file system. The Jira would add support for such apis in Ozone manager in order to reduce the number of rpc calls to Ozone Manager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1301) Optimize recursive ozone filesystem apis
Lokesh Jain created HDDS-1301: - Summary: Optimize recursive ozone filesystem apis Key: HDDS-1301 URL: https://issues.apache.org/jira/browse/HDDS-1301 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain This Jira aims to optimise recursive apis in ozone file system. These are the apis which have a recursive flag which requires an operation to be performed on all the children of the directory. The Jira would add support for recursive apis in Ozone manager in order to reduce the number of rpc calls to Ozone Manager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1301) Optimize recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1301: -- Description: This Jira aims to optimise recursive apis in ozone file system. These are the apis which have a recursive flag which requires an operation to be performed on all the children of the directory. The Jira would add support for recursive apis in Ozone manager in order to reduce the number of rpc calls to Ozone Manager. Also currently these operations are not atomic. This Jira would make all the operations in ozone filesystem atomic. (was: This Jira aims to optimise recursive apis in ozone file system. These are the apis which have a recursive flag which requires an operation to be performed on all the children of the directory. The Jira would add support for recursive apis in Ozone manager in order to reduce the number of rpc calls to Ozone Manager.) > Optimize recursive ozone filesystem apis > > > Key: HDDS-1301 > URL: https://issues.apache.org/jira/browse/HDDS-1301 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > > This Jira aims to optimise recursive apis in ozone file system. These are the > apis which have a recursive flag which requires an operation to be performed > on all the children of the directory. The Jira would add support for > recursive apis in Ozone manager in order to reduce the number of rpc calls to > Ozone Manager. Also currently these operations are not atomic. This Jira > would make all the operations in ozone filesystem atomic. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1300: -- Attachment: HDDS-1300.001.patch > Optimize non-recursive ozone filesystem apis > > > Key: HDDS-1300 > URL: https://issues.apache.org/jira/browse/HDDS-1300 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Filesystem, Ozone Manager >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1300.001.patch > > > This Jira aims to optimise non recursive apis in ozone file system. The Jira > would add support for such apis in Ozone manager in order to reduce the > number of rpc calls to Ozone Manager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795078#comment-16795078 ] Lokesh Jain commented on HDDS-1300: --- v1 patch adds support for createDirectory api. Other apis would be added in later patch. The patch can be submitted after HDDS-1185. > Optimize non-recursive ozone filesystem apis > > > Key: HDDS-1300 > URL: https://issues.apache.org/jira/browse/HDDS-1300 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Filesystem, Ozone Manager >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDDS-1300.001.patch > > > This Jira aims to optimise non recursive apis in ozone file system. The Jira > would add support for such apis in Ozone manager in order to reduce the > number of rpc calls to Ozone Manager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796176#comment-16796176 ] Lokesh Jain commented on HDDS-1306: --- The issue is occurring because the raft server creation thread is blocked in ratis. During pipeline creation, ratis creates a new RaftServerImpl for the pipeline in the datanode. This creation is done in common thread pool. All the threads in the common thread pool are blocked by ReplicationActivityStatus#fireReplicationStart call. This function calls Thread.sleep which is blocking the thread in common thread pool. I think we should use scheduled thread pool or a better executor for this purpose. Regarding the exception seen in the test: When the RaftServerImpl creation is unblocked the raft server has already moved to CLOSING state. So when it tries to move it to RUNNING state it throws the illegal state exception. > TestContainerStateManagerIntegration fails in Ratis shutdown > > > Key: HDDS-1306 > URL: https://issues.apache.org/jira/browse/HDDS-1306 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, test >Reporter: Arpit Agarwal >Assignee: Lokesh Jain >Priority: Blocker > > TestContainerStateManagerIntegration occasionally fails in Ratis shutdown. > Other test cases like TestScmChillMode may be failing due to the same error. > Full stack trace in a comment below since it's a lot of text. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
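The scheduling problem described in the comment above can be sketched as follows; the method names are illustrative, not the actual {{ReplicationActivityStatus}} code. The first variant parks a {{ForkJoinPool.commonPool()}} worker for the entire delay, which is how a handful of concurrently running MiniOzoneClusters can pin every common-pool thread; the second queues the task on a dedicated scheduler and holds no thread while waiting:

```java
import java.util.concurrent.*;

public class ReplicationDelaySketch {
    // Problematic pattern (illustrative of fireReplicationStart): the
    // delay is implemented with Thread.sleep inside runAsync, so a
    // common-pool worker is blocked for the whole wait.
    static CompletableFuture<Void> fireAfterDelayBlocking(Runnable task, long delayMs) {
        return CompletableFuture.runAsync(() -> {
            try {
                Thread.sleep(delayMs);        // worker blocked here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            task.run();
        });
    }

    // Suggested direction: a ScheduledExecutorService holds no worker
    // thread while waiting; the task runs only once the delay expires.
    static ScheduledFuture<?> fireAfterDelayScheduled(
            ScheduledExecutorService scheduler, Runnable task, long delayMs) {
        return scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        CountDownLatch fired = new CountDownLatch(1);
        fireAfterDelayScheduled(scheduler, fired::countDown, 50);
        if (!fired.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("scheduled task did not fire");
        }
        scheduler.shutdown();
        System.out.println("delayed task fired without blocking a pool thread");
    }
}
```

With the scheduled variant, no worker thread is consumed during the delay, so unrelated work such as RaftServerImpl creation is not starved.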
[jira] [Updated] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1306: -- Attachment: HDDS-1306.001.patch > TestContainerStateManagerIntegration fails in Ratis shutdown > > > Key: HDDS-1306 > URL: https://issues.apache.org/jira/browse/HDDS-1306 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, test >Reporter: Arpit Agarwal >Assignee: Lokesh Jain >Priority: Blocker > Attachments: HDDS-1306.001.patch > > > TestContainerStateManagerIntegration occasionally fails in Ratis shutdown. > Other test cases like TestScmChillMode may be failing due to the same error. > Full stack trace in a comment below since it's a lot of text. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1306:
------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796794#comment-16796794 ]

Lokesh Jain commented on HDDS-1306:
-----------------------------------

| CompletableFuture.runAsync which uses thread from ForkJoinPool, so how we are blocking all threads. Here only one thread is being used, and it is not spawning multiple threads.

[~bharatviswa] There are 7 threads in the common thread pool (its size is determined by the number of cores on the system), and in total there are 7 junit tests running. Each test spawns a MiniOzoneCluster, in which one thread of the common pool is blocked by ReplicationActivityStatus. In total, all 7 threads of the common pool are blocked by the 7 tests. This blocks RaftServer creation only in the last junit test that runs; all other tests pass.

[~arpitagarwal] TestContainerStateManagerIntegration passes in the precommit build. I will check whether any other test failures are related.
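The "7 threads" figure above follows from how the JVM sizes the common pool. As a hedged sketch (the exact parallelism can be overridden with the `java.util.concurrent.ForkJoinPool.common.parallelism` system property), the default is roughly one less than the number of available cores, so an 8-core CI box gets about 7 workers, and 7 blocked tasks exhaust the pool:

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        // Default common-pool parallelism is max(1, availableProcessors() - 1),
        // e.g. ~7 workers on an 8-core machine. If each of 7 MiniOzoneClusters
        // parks one worker in Thread.sleep, no worker remains for RaftServer creation.
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        int cores = Runtime.getRuntime().availableProcessors();
        // Sanity check of the sizing relationship under default settings.
        System.out.println(parallelism >= 1 && parallelism <= cores);
    }
}
```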
[jira] [Commented] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796909#comment-16796909 ]

Lokesh Jain commented on HDDS-1306:
-----------------------------------

[~arpitagarwal] Thanks for reviewing the patch! I have uploaded the v2 patch, which closes the scheduler.
[jira] [Updated] (HDDS-1306) TestContainerStateManagerIntegration fails in Ratis shutdown
[ https://issues.apache.org/jira/browse/HDDS-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1306:
------------------------------
    Attachment: HDDS-1306.002.patch
[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.
[ https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798890#comment-16798890 ]

Lokesh Jain commented on HDDS-1185:
-----------------------------------

[~msingh] Thanks for updating the patch! The patch looks good to me. +1.

> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.
> -----------------------------------------------------------------------------------
>
>                 Key: HDDS-1185
>                 URL: https://issues.apache.org/jira/browse/HDDS-1185
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Filesystem
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>           Priority: Critical
>             Fix For: 0.5.0
>
>         Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, HDDS-1185.003.patch, HDDS-1185.004.patch, HDDS-1185.005.patch, HDDS-1185.006.patch
>
> GetFileStatus sends multiple rpc calls to Ozone Manager to fetch the file
> status for a given file. This can be optimized by performing all the
> processing on the OzoneManager for this.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Attachment: HDDS-1300.002.patch

> Optimize non-recursive ozone filesystem apis
> --------------------------------------------
>
>                 Key: HDDS-1300
>                 URL: https://issues.apache.org/jira/browse/HDDS-1300
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Filesystem, Ozone Manager
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>           Priority: Major
>         Attachments: HDDS-1300.001.patch, HDDS-1300.002.patch
>
> This Jira aims to optimise non recursive apis in ozone file system. The Jira
> would add support for such apis in Ozone manager in order to reduce the
> number of rpc calls to Ozone Manager.
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800084#comment-16800084 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

The v2 patch adds implementations for createFile, createDirectory and readFile.
[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.
[ https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800085#comment-16800085 ]

Lokesh Jain commented on HDDS-1185:
-----------------------------------

[~msingh] The patch needs to be rebased.
[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.
[ https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800438#comment-16800438 ]

Lokesh Jain commented on HDDS-1185:
-----------------------------------

[~msingh] Thanks for updating the patch! The v8 patch looks good to me. +1.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Attachment: HDDS-1300.003.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800446#comment-16800446 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

Uploaded the rebased v3 patch.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Attachment: HDDS-1300.004.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801952#comment-16801952 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

[~msingh] Thanks for reviewing the patch! The v4 patch addresses your comments.

| 4) KeyManagerImpl:1395. lets rename this to create/createFile to follow the same naming convention as OzoneFilesystem. The same for lookupFile on line 1443 as well.

For lookupFile I was thinking of keeping the readFile api in OzoneClientAdapterImpl, because there we return an input stream, whereas the lookupFile api returns OmKeyInfo.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Attachment: HDDS-1300.005.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803062#comment-16803062 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

The v5 patch fixes the checkstyle and findbugs issues. I will create jiras for the unit test failures.
[jira] [Created] (HDDS-1341) TestContainerReplication#testContainerReplication fails intermittently
Lokesh Jain created HDDS-1341:
---------------------------------
             Summary: TestContainerReplication#testContainerReplication fails intermittently
                 Key: HDDS-1341
                 URL: https://issues.apache.org/jira/browse/HDDS-1341
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Lokesh Jain

The test fails intermittently. The link to the test report can be found below.

https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/

{code:java}
java.lang.AssertionError: Container is not replicated to the destination datanode
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertNotNull(Assert.java:621)
	at org.apache.hadoop.ozone.container.TestContainerReplication.testContainerReplication(TestContainerReplication.java:139)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
[jira] [Created] (HDDS-1342) TestOzoneManagerHA#testOMProxyProviderFailoverOnConnectionFailure fails intermittently
Lokesh Jain created HDDS-1342:
---------------------------------
             Summary: TestOzoneManagerHA#testOMProxyProviderFailoverOnConnectionFailure fails intermittently
                 Key: HDDS-1342
                 URL: https://issues.apache.org/jira/browse/HDDS-1342
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Lokesh Jain

The test fails intermittently. The link to the test report can be found below.

[https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/]

{code:java}
java.net.ConnectException: Call From ea902c1cb730/172.17.0.3 to localhost:10174 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
	at org.apache.hadoop.ipc.Client.call(Client.java:1457)
	at org.apache.hadoop.ipc.Client.call(Client.java:1367)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy34.submitRequest(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy34.submitRequest(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
	at com.sun.proxy.$Proxy34.submitRequest(Unknown Source)
	at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:310)
	at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.createVolume(OzoneManagerProtocolClientSideTranslatorPB.java:343)
	at org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:275)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
	at com.sun.proxy.$Proxy86.createVolume(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
	at com.sun.proxy.$Proxy86.createVolume(Unknown Source)
	at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:100)
	at org.apache.hadoop.ozone.om.TestOzoneManagerHA.createVolumeTest(TestOzoneManagerHA.java:162)
	at org.apache.hadoop.ozone.om.TestOzoneManagerHA.testOMProxyProviderFailoverOnConnectionFailure(TestOzoneManagerHA.java:237)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke
{code}
[jira] [Created] (HDDS-1343) TestNodeFailure times out intermittently
Lokesh Jain created HDDS-1343:
---------------------------------
             Summary: TestNodeFailure times out intermittently
                 Key: HDDS-1343
                 URL: https://issues.apache.org/jira/browse/HDDS-1343
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Lokesh Jain

TestNodeFailure times out while waiting for the cluster to be ready. This is done in cluster setup.

{code:java}
java.lang.Thread.State: WAITING (on object monitor)
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
	at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
	at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
	at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
	at org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:140)
	at org.apache.hadoop.hdds.scm.pipeline.TestNodeFailure.init(TestNodeFailure.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

5 datanodes out of 6 are able to heartbeat in the test result [https://builds.apache.org/job/PreCommit-HDDS-Build/2582/testReport/].
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803103#comment-16803103 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

I have created HDDS-1341, HDDS-1342 and HDDS-1343 to track the test failures. All the tests pass on the local machine and are not related to the patch.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-1300:
------------------------------
    Attachment: HDDS-1300.006.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803839#comment-16803839 ]

Lokesh Jain commented on HDDS-1300:
-----------------------------------

[~msingh] Thanks for reviewing the patch! The v6 patch addresses your comments.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1300:
Attachment: HDDS-1300.007.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803954#comment-16803954 ] Lokesh Jain commented on HDDS-1300:
[~msingh] Based on our offline discussion, the v7 patch avoids the allocateBlock call while the lock is held in createFile.
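[Editorial note] The lock-scoping change described in the comment above can be sketched as follows. This is an illustrative sketch only, not the actual OzoneManager code; the names allocateBlock, createFile, and bucketLock mirror the discussion but their signatures here are assumptions.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockScopeSketch {
  static final ReentrantLock bucketLock = new ReentrantLock();

  // Stands in for the expensive SCM RPC that allocates a block.
  static long allocateBlock() {
    return 42L;
  }

  static void createFile(String key) {
    // Do the expensive allocation BEFORE taking the lock, so other
    // bucket operations are not blocked behind the RPC.
    long block = allocateBlock();
    bucketLock.lock();
    try {
      // Only the cheap metadata commit happens under the lock.
      System.out.println("committed " + key + " -> block " + block);
    } finally {
      bucketLock.unlock();
    }
  }

  public static void main(String[] args) {
    createFile("vol/bucket/key1");
  }
}
```

The point of the change is that lock hold time shrinks from (RPC latency + metadata update) to just the metadata update.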
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1300:
Attachment: HDDS-1300.008.patch
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804739#comment-16804739 ] Lokesh Jain commented on HDDS-1300:
[~bharatviswa] Thanks for reviewing the patch! The v8 patch removes the allocateBlock call from the createFile function; it can be added back in a follow-up Jira.
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805006#comment-16805006 ] Lokesh Jain commented on HDDS-1300:
[~msingh] [~bharatviswa] Thanks for reviewing the patch! I have committed the patch to trunk.
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1300:
Fix Version/s: 0.5.0
[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-1300:
Resolution: Resolved
Status: Resolved (was: Patch Available)
[jira] [Reopened] (HDDS-1134) OzoneFileSystem#create should allocate at least one block for future writes.
[ https://issues.apache.org/jira/browse/HDDS-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reopened HDDS-1134:
Reopening issue as it was not fixed in HDDS-1300.

> OzoneFileSystem#create should allocate at least one block for future writes.
> ----------------------------------------------------------------------------
>
>                 Key: HDDS-1134
>                 URL: https://issues.apache.org/jira/browse/HDDS-1134
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Manager
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Major
>         Attachments: HDDS-1134.001.patch
>
> While opening a new key, OM should allocate at least one block for the key, in case the client is not sure about the number of blocks. However, for users of OzoneFS, if the key is being created for a directory, then no blocks should be allocated.
[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-325:
Attachment: HDDS-325.005.patch

> Add event watcher for delete blocks command
> -------------------------------------------
>
>                 Key: HDDS-325
>                 URL: https://issues.apache.org/jira/browse/HDDS-325
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode, SCM
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-325.001.patch, HDDS-325.002.patch, HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the RPC call currently required for the datanode to send the acknowledgement for deleteBlocks.
[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589207#comment-16589207 ] Lokesh Jain commented on HDDS-325:
[~elek] I have uploaded a rebased v5 patch.
[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16593257#comment-16593257 ] Lokesh Jain commented on HDDS-325:
{quote}I am not sure, but I think in RetriableDatanodeEventWatcher.onTimeout we need to send the message to SCMEvents.DATANODE_COMMAND and not SCMEvents.RETRIABLE_DATANODE_COMMAND (a unit test would help to decide this question...){quote}
If we send DATANODE_COMMAND, the command is never retried on timeout. Therefore I am firing RETRIABLE_DATANODE_COMMAND, although this currently leads to an unbounded number of retries because we are not limiting the retry count.
{quote}Let's say we have two kinds of commands: new CommandForDatanode<>(dnId, new DeleteBlocksCommand) and new CommandForDatanode<>(dnId, new EatBananaCommand). Both could be sent to SCMEvents.RETRIABLE_DATANODE_COMMAND for RetriableDatanodeEventWatcher (and for the scmNodeManager) and they could handle both of them.{quote}
The problem is that CMD_STATUS_REPORT is a collection of command statuses from the datanode, and each status prevents the timeout of a specific event. Therefore we would need to either watch the events fired by CommandStatusReportHandler, or, if we watch CMD_STATUS_REPORT itself, change the event watcher logic so it can watch an event that combines many replies. The problem I was mentioning occurs if we watch the events fired by CommandStatusReportHandler.
{quote}we can create a builder (EventHandler.watchEvents is almost like a builder).{quote}
I like this idea. With a builder we can very easily separate start events from completion events. Further, we can allow plugging in custom timeout or completion logic for such events rather than using a default one; by "logic" I mean a custom function that handles these events in the event queue. This way we can easily handle CMD_STATUS_REPORT. I will upload another patch addressing the other comments and try adding a unit test.
I agree we do not need the watchEvents function for now and can do it as part of another Jira when required.
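[Editorial note] The builder idea discussed in the comment above (separate start and completion events, pluggable timeout logic) could look roughly like this. All class and method names here are hypothetical illustrations, not the actual HDDS EventWatcher API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class WatcherBuilderSketch {

  // Minimal stand-in for a tracked event carrying an id.
  static class TrackedEvent {
    final long id;
    TrackedEvent(long id) { this.id = id; }
  }

  // Builder that pairs a start event type with a completion event type
  // and lets the caller plug in custom timeout logic.
  static class EventWatcherBuilder {
    private String startEvent;
    private String completionEvent;
    private Consumer<TrackedEvent> onTimeout = e -> { };

    EventWatcherBuilder watchStart(String event) { this.startEvent = event; return this; }
    EventWatcherBuilder watchCompletion(String event) { this.completionEvent = event; return this; }
    EventWatcherBuilder onTimeout(Consumer<TrackedEvent> handler) { this.onTimeout = handler; return this; }
    Watcher build() { return new Watcher(startEvent, completionEvent, onTimeout); }
  }

  // Tracks in-flight events: completion removes them, timeout fires the handler.
  static class Watcher {
    final String startEvent, completionEvent;
    final Consumer<TrackedEvent> onTimeout;
    final Map<Long, TrackedEvent> inFlight = new HashMap<>();

    Watcher(String s, String c, Consumer<TrackedEvent> t) {
      startEvent = s; completionEvent = c; onTimeout = t;
    }
    void onStart(TrackedEvent e) { inFlight.put(e.id, e); }
    void onCompletion(long id) { inFlight.remove(id); }
    void fireTimeouts() {               // in reality driven by a lease/timeout manager
      inFlight.values().forEach(onTimeout);
      inFlight.clear();
    }
  }

  public static void main(String[] args) {
    Watcher w = new EventWatcherBuilder()
        .watchStart("RETRIABLE_DATANODE_COMMAND")
        .watchCompletion("DELETE_BLOCK_STATUS")
        .onTimeout(e -> System.out.println("resend " + e.id))
        .build();
    w.onStart(new TrackedEvent(1));
    w.onStart(new TrackedEvent(2));
    w.onCompletion(1);                  // event 1 acked; only event 2 should time out
    w.fireTimeouts();
  }
}
```

A custom onTimeout handler is where a retry limit (missing in the patch as discussed above) could naturally live.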
[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-325:
Attachment: HDDS-325.006.patch
[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596352#comment-16596352 ] Lokesh Jain commented on HDDS-325:
Uploaded a rebased v6 patch which addresses [~elek]'s comments. I have modified the test case in TestBlockDeletion to verify the event fired by the watcher.
[jira] [Created] (HDDS-386) Create a datanode cli
Lokesh Jain created HDDS-386:

Summary: Create a datanode cli
Key: HDDS-386
URL: https://issues.apache.org/jira/browse/HDDS-386
Project: Hadoop Distributed Data Store
Issue Type: Bug
Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain
Fix For: 0.2.1

For block deletion we need a debug CLI on the datanode to report the state of the containers and the number of chunks present in each container.
[jira] [Updated] (HDDS-386) Create a datanode debug cli
[ https://issues.apache.org/jira/browse/HDDS-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-386:
Summary: Create a datanode debug cli (was: Create a datanode cli)
[jira] [Created] (HDDS-397) Handle deletion for keys with no blocks
Lokesh Jain created HDDS-397:

Summary: Handle deletion for keys with no blocks
Key: HDDS-397
URL: https://issues.apache.org/jira/browse/HDDS-397
Project: Hadoop Distributed Data Store
Issue Type: Bug
Components: Ozone Manager
Reporter: Lokesh Jain
Assignee: Lokesh Jain
Fix For: 0.2.1

Keys which do not contain blocks can be deleted directly from OzoneManager.
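[Editorial note] The idea behind HDDS-397 can be sketched as follows: a key whose block list is empty has no data on datanodes, so nothing needs to be handed to the block-deletion service. This is a hedged illustration with invented table names (keyTable, deletedTable), not the actual OzoneManager code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NoBlockDeleteSketch {
  // key name -> block ids (empty list models a directory key with no data)
  static Map<String, List<Long>> keyTable = new HashMap<>();
  // keys whose blocks still need to be deleted from datanodes via SCM
  static Map<String, List<Long>> deletedTable = new HashMap<>();

  static void deleteKey(String key) {
    List<Long> blocks = keyTable.remove(key);
    if (blocks == null || blocks.isEmpty()) {
      return;                          // no blocks: nothing for SCM to delete
    }
    deletedTable.put(key, blocks);     // blocks exist: hand off to deletion service
  }

  public static void main(String[] args) {
    keyTable.put("dir1/", new ArrayList<>());       // directory key, no blocks
    keyTable.put("dir1/file", List.of(1L, 2L));     // data key with two blocks
    deleteKey("dir1/");
    deleteKey("dir1/file");
    // Only dir1/file should end up pending block deletion.
    System.out.println(deletedTable.keySet());
  }
}
```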
[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService
[ https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602256#comment-16602256 ] Lokesh Jain commented on HDDS-358:
[~anu] Can you please rebase the patch? The patch looks good to me. Please find my comments below.
# KeyDeletingService - We can move the logs into KeyDeletingTask and convert them to debug level. We should log the case where a block deletion result from SCM is a failure, and also log the number of keys being deleted by the service.
# We also need to start the KeyDeletingService and the block deletion tests. We can do it as part of a separate Jira though.
# OmMetadataManagerImpl:50 - star import

> Use DBStore and TableStore for DeleteKeyService
> -----------------------------------------------
>
>                 Key: HDDS-358
>                 URL: https://issues.apache.org/jira/browse/HDDS-358
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Anu Engineer
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-358.001.patch
>
> DeleteKeysService and OpenKeyDeleteService.
[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService
[ https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604289#comment-16604289 ] Lokesh Jain commented on HDDS-358:
[~anu] Thanks for updating the patch! The v3 patch looks good to me. +1
[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-325:
Attachment: HDDS-325.007.patch
[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command
[ https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604720#comment-16604720 ] Lokesh Jain commented on HDDS-325:
Uploaded a rebased v7 patch.
[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks
[ https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-397:
Status: Patch Available (was: Open)
[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks
[ https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-397:
Attachment: HDDS-397.001.patch
[jira] [Assigned] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands
[ https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain reassigned HDFS-13893:
Assignee: Lokesh Jain

> DiskBalancer: no validations for Disk balancer commands
> --------------------------------------------------------
>
>                 Key: HDFS-13893
>                 URL: https://issues.apache.org/jira/browse/HDFS-13893
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: diskbalancer
>            Reporter: Harshakiran Reddy
>            Assignee: Lokesh Jain
>            Priority: Major
>              Labels: newbie
>
> Scenario:
> 1. Run a Disk Balancer command with extra arguments:
> {noformat}
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to: /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat}
> Expected output: Disk Balancer commands should fail if any invalid or extra arguments are passed.
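[Editorial note] The missing validation described in HDFS-13893 amounts to rejecting leftover positional arguments after option parsing. A minimal sketch of that check (hypothetical names; not the actual DiskBalancer command code):

```java
import java.util.Arrays;

public class ArgValidationSketch {
  // Fail if more positional arguments remain than the command expects.
  static void validate(String[] remaining, int expected) {
    if (remaining.length > expected) {
      throw new IllegalArgumentException("Unexpected extra arguments: "
          + String.join(" ", Arrays.copyOfRange(remaining, expected, remaining.length)));
    }
  }

  public static void main(String[] args) {
    validate(new String[] {"hostname"}, 1);             // expected usage: passes
    try {
      validate(new String[] {"hostname", "sgfsdgfs"}, 1);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());               // the extra token is rejected
    }
  }
}
```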