[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649719#comment-16649719 ]

Shashikant Banerjee commented on HDDS-629:
------------------------------------------

Thanks, [~jnp], for the review. Patch v5 addresses the review comments. The test failures reported here are not related to the patch.

> Make ApplyTransaction calls in ContainerStateMachine idempotent
> ---------------------------------------------------------------
>
>                 Key: HDDS-629
>                 URL: https://issues.apache.org/jira/browse/HDDS-629
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDDS-629.000.patch, HDDS-629.001.patch, HDDS-629.002.patch, HDDS-629.003.patch, HDDS-629.004.patch, HDDS-629.005.patch
>
> When a Datanode restarts, it may reapply already-applied transactions when it rejoins the pipeline. To handle this, all ApplyTransaction calls in Ratis need to be made idempotent.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDDS-629:
-------------------------------------
    Attachment: HDDS-629.005.patch
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649709#comment-16649709 ]

Jitendra Nath Pandey commented on HDDS-629:
-------------------------------------------

Please fix the javadoc for {{validateChunkForOverwrite}}.
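The requirement discussed in this thread, that a replayed ApplyTransaction call after a Datanode restart must become a no-op rather than fail or duplicate a write, can be sketched as below. This is a hypothetical simplification, not the ContainerStateMachine code from the patch: it records a checksum per chunk name and silently skips an identical replayed write.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an idempotent write path: before applying a chunk
// write, check whether the same chunk (same name and checksum) was already
// committed, and treat the replay as a no-op instead of an error.
public class IdempotentApply {
    private final Map<String, Long> committedChunks = new ConcurrentHashMap<>();

    /** Returns true if the write was applied, false if it was a duplicate replay. */
    public boolean applyWriteChunk(String chunkName, long checksum) {
        Long existing = committedChunks.get(chunkName);
        if (existing != null && existing == checksum) {
            return false; // already applied: replayed transaction is a no-op
        }
        committedChunks.put(chunkName, checksum);
        return true;
    }
}
```

Applying the same transaction twice then changes state only once, which is exactly the property the restart/replay scenario in the issue description needs.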
[jira] [Created] (HDFS-13992) cross-cluster rack awareness for distcp
Ruslan Dautkhanov created HDFS-13992:
------------------------------------

             Summary: cross-cluster rack awareness for distcp
                 Key: HDFS-13992
                 URL: https://issues.apache.org/jira/browse/HDFS-13992
             Project: Hadoop HDFS
          Issue Type: New Feature
    Affects Versions: 2.7.7, 3.0.3, 3.1.1, 2.8.4
            Reporter: Ruslan Dautkhanov

It would be great if distcp supported cross-cluster rack awareness. For example, suppose we have hdfs cluster1 and hdfs cluster2. Both clusters span three switches, both have rack awareness enabled, and both name the same switches the same way. When distcp runs a data replication job, it could then replicate hdfs blocks only to counterpart datanodes on the destination cluster that are on the same physical network switch, minimizing latency and maximizing bandwidth. It could be an option, activated through a `distcp` command-line switch. We have multiple clusters with a default replication of 3, and all of those clusters live in the same three "racks" / "top of the rack switches". This could drastically reduce inter-switch network traffic during huge distcp jobs.
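The proposed target selection could be sketched roughly as follows. `RackAwareTargets` and its parameters are illustrative names for this feature request, not existing distcp or Hadoop APIs: datanodes on the identically named destination rack are preferred, with other racks as a fallback when the matching rack has fewer nodes than the replication factor.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of cross-cluster rack-aware target selection:
// prefer destination datanodes whose rack name matches the source block's
// rack, then fall back to nodes on other racks.
public class RackAwareTargets {
    public static List<String> pickTargets(String sourceRack,
            Map<String, List<String>> destRackToNodes, int replication) {
        // Nodes on the matching rack come first.
        List<String> ordered = new ArrayList<>(
            destRackToNodes.getOrDefault(sourceRack, Collections.emptyList()));
        // Fallback: nodes on every other rack, in map order.
        for (Map.Entry<String, List<String>> e : destRackToNodes.entrySet()) {
            if (!e.getKey().equals(sourceRack)) {
                ordered.addAll(e.getValue());
            }
        }
        return ordered.subList(0, Math.min(replication, ordered.size()));
    }
}
```

With identically named racks on both clusters, most block transfers would then stay within one top-of-rack switch, which is the traffic reduction the request is after.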
[jira] [Commented] (HDDS-656) Add logic for pipeline report and action processing in new pipeline code
[ https://issues.apache.org/jira/browse/HDDS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649544#comment-16649544 ]

Hadoop QA commented on HDDS-656:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 16s | trunk passed |
| +1 | compile | 22m 25s | trunk passed |
| +1 | checkstyle | 3m 46s | trunk passed |
| +1 | mvnsite | 2m 41s | trunk passed |
| +1 | shadedclient | 19m 41s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| -1 | findbugs | 0m 55s | hadoop-hdds/server-scm in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 2m 31s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 28s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 20s | the patch passed |
| +1 | compile | 21m 30s | the patch passed |
| +1 | cc | 21m 30s | the patch passed |
| +1 | javac | 21m 30s | the patch passed |
| -0 | checkstyle | 3m 53s | root: The patch generated 5 new + 1 unchanged - 13 fixed = 6 total (was 14) |
| +1 | mvnsite | 3m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 6s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 2m 50s | the patch passed |
| +1 | javadoc | 2m 21s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 6s | common in the patch passed. |
| +1 | unit | 0m 30s | client in the patch passed. |
| -1 | unit | 15m 56s | server-scm in the patch failed. |
| -1 | unit | 6m 28s | integration-test in the patch failed. |
| -1 | asflicense | 0m 42s | The patch generated 1 ASF License warnings. |
|    |  | 148m 2s |  |

|| Reason || Tests ||
| Failed junit tests
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649512#comment-16649512 ]

Hadoop QA commented on HDDS-629:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 27s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 23s | trunk passed |
| +1 | compile | 17m 34s | trunk passed |
| +1 | checkstyle | 2m 54s | trunk passed |
| +1 | mvnsite | 1m 8s | trunk passed |
| +1 | shadedclient | 14m 43s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| -1 | findbugs | 0m 51s | hadoop-hdds/container-service in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 54s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 17m 0s | the patch passed |
| +1 | javac | 17m 0s | the patch passed |
| +1 | checkstyle | 3m 0s | the patch passed |
| +1 | mvnsite | 1m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 6s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 0m 57s | the patch passed |
| +1 | javadoc | 0m 55s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 0s | container-service in the patch passed. |
| -1 | unit | 10m 6s | integration-test in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
|    |  | 105m 6s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
| | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
| | hadoop.hdds.scm.pipeline.TestNodeFailure |
| | hadoop.ozone.client.rest.TestOzoneRestClient |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-629 |
| JIRA Patch URL |
[jira] [Updated] (HDDS-656) Add logic for pipeline report and action processing in new pipeline code
[ https://issues.apache.org/jira/browse/HDDS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-656:
-----------------------------
    Status: Patch Available  (was: Open)

> Add logic for pipeline report and action processing in new pipeline code
> ------------------------------------------------------------------------
>
>                 Key: HDDS-656
>                 URL: https://issues.apache.org/jira/browse/HDDS-656
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>            Priority: Major
>         Attachments: HDDS-656.001.patch
>
> As part of pipeline refactoring, new pipeline management classes were added as part of HDDS-587. This Jira adds logic for pipeline report and action processing in the new code.
[jira] [Updated] (HDDS-656) Add logic for pipeline report and action processing in new pipeline code
[ https://issues.apache.org/jira/browse/HDDS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDDS-656:
-----------------------------
    Attachment: HDDS-656.001.patch
[jira] [Created] (HDDS-656) Add logic for pipeline report and action processing in new pipeline code
Lokesh Jain created HDDS-656:
-----------------------------

             Summary: Add logic for pipeline report and action processing in new pipeline code
                 Key: HDDS-656
                 URL: https://issues.apache.org/jira/browse/HDDS-656
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: SCM
            Reporter: Lokesh Jain
            Assignee: Lokesh Jain

As part of pipeline refactoring, new pipeline management classes were added as part of HDDS-587. This Jira adds logic for pipeline report and action processing in the new code.
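One plausible shape of the pipeline report processing this Jira describes can be sketched as below. This is an illustrative assumption, not the HDDS-656 code: SCM tracks which member datanodes of a pipeline have reported it, and marks the pipeline open once every member has.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of pipeline report handling in SCM: collect reports
// from the pipeline's member datanodes and open the pipeline once all of
// them have reported it. Class and method names are illustrative only.
public class PipelineReportTracker {
    private final Set<String> members;
    private final Set<String> reported = new HashSet<>();
    private boolean open = false;

    public PipelineReportTracker(Set<String> members) {
        this.members = members;
    }

    public void onPipelineReport(String datanode) {
        if (members.contains(datanode)) {
            reported.add(datanode);
            if (reported.containsAll(members)) {
                open = true; // every member has reported: pipeline is usable
            }
        }
    }

    public boolean isOpen() {
        return open;
    }
}
```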
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649478#comment-16649478 ]

Shashikant Banerjee commented on HDDS-629:
------------------------------------------

Patch v4 fixes the checkstyle issues as well as the unit test failures.
[jira] [Updated] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDDS-629:
-------------------------------------
    Attachment: HDDS-629.004.patch
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649343#comment-16649343 ]

Hadoop QA commented on HDDS-629:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 37s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 2m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 44s | trunk passed |
| +1 | compile | 18m 19s | trunk passed |
| +1 | checkstyle | 3m 33s | trunk passed |
| +1 | mvnsite | 1m 19s | trunk passed |
| +1 | shadedclient | 17m 14s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| -1 | findbugs | 0m 56s | hadoop-hdds/container-service in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 7s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 4s | the patch passed |
| +1 | compile | 24m 6s | the patch passed |
| +1 | javac | 24m 6s | the patch passed |
| -0 | checkstyle | 4m 10s | root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | mvnsite | 1m 45s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 18s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 1m 20s | the patch passed |
| +1 | javadoc | 1m 26s | the patch passed |
|| Other Tests ||
| -1 | unit | 1m 15s | container-service in the patch failed. |
| -1 | unit | 10m 31s | integration-test in the patch failed. |
| +1 | asflicense | 1m 7s | The patch does not generate ASF License warnings. |
|    |  | 128m 53s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.keyvalue.TestBlockManagerImpl |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
| | hadoop.ozone.TestStorageContainerManager |
| | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
| | hadoop.ozone.web.client.TestKeysRatis |

|| Subsystem || Report/Notes ||
|
[jira] [Commented] (HDDS-519) Implement ListBucket REST endpoint
[ https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649337#comment-16649337 ]

Hudson commented on HDDS-519:
-----------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15211 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15211/])
HDDS-519. Implement ListBucket REST endpoint. Contributed by LiXin Ge. (elek: rev 5033deb13b7f393d165e282b0c3b9e1ee1390bb2)
* (add) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/bucket/TestListBucket.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/ListBucketResponse.java
* (edit) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/bucket/ListBucket.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/commontypes/BucketMetadata.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java

> Implement ListBucket REST endpoint
> ----------------------------------
>
>                 Key: HDDS-519
>                 URL: https://issues.apache.org/jira/browse/HDDS-519
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Elek, Marton
>            Assignee: LiXin Ge
>            Priority: Major
>              Labels: newbie
>             Fix For: 0.3.0
>
>         Attachments: HDDS-519.000.patch
>
> You can also name it GetService. See the AWS reference: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html
>
> The List Bucket API needs the call to be handled at the root resource ("/{volume}").
>
> This implementation of the GET operation returns a list of all buckets owned by the authenticated sender of the request.
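The GET Service semantics quoted above (return only the buckets owned by the authenticated sender of the request) can be sketched as below. This is an illustrative simplification, not the committed s3gateway classes listed in the build output:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of ListBuckets / GET Service filtering: given a map of
// bucket name -> owner, return the caller's buckets sorted by name, mirroring
// the AWS ListBuckets response shape.
public class ListBucketsSketch {
    public static List<String> bucketsFor(String caller,
            Map<String, String> bucketToOwner) {
        return bucketToOwner.entrySet().stream()
            .filter(e -> e.getValue().equals(caller)) // only the caller's buckets
            .map(Map.Entry::getKey)
            .sorted()
            .collect(Collectors.toList());
    }
}
```

In the gateway itself this filtering happens against Ozone's volume/bucket metadata rather than an in-memory map, and the result is serialized into the S3 XML response.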
[jira] [Updated] (HDDS-519) Implement ListBucket REST endpoint
[ https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elek, Marton updated HDDS-519:
------------------------------
       Resolution: Fixed
    Fix Version/s: 0.3.0
           Status: Resolved  (was: Patch Available)

Committed to trunk/ozone-0.3. Thank you very much, [~GeLiXin], for the contribution.
[jira] [Comment Edited] (HDDS-519) Implement ListBucket REST endpoint
[ https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649322#comment-16649322 ]

Elek, Marton edited comment on HDDS-519 at 10/14/18 10:02 AM:
--------------------------------------------------------------

+1. LGTM

Tested with ozones3 cluster and curl, and works well.

Note: I agree, the volume not found exception is not clear, as there is no volume at s3. One option is to throw bucket not found with custom exception message but it's not so good. But we need to solve it later. We are just moving to an other approach where we have no volume in the url and the volume comes from the username -> volumename naming convention (user elek will use s3elek volume for all the s3 operation).

Summary: will commit it shortly.

was (Author: elek):

+1. LGTM

Tested with ozones3 cluster and curl, and works well.

Note: I agree, the volume not found exception is not clear, as there is no volume at s3. One option is to throw bucket not found with custom exception message but it's not so good. But we need to solve it later. We are just moving to an other approach where we have no volume in the url and the volume comes from the username -> volumename naming convention (user elek will use s3elek volume for all the s3 operation).

Summary: will commit in shortly.
[jira] [Commented] (HDDS-519) Implement ListBucket REST endpoint
[ https://issues.apache.org/jira/browse/HDDS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649322#comment-16649322 ]

Elek, Marton commented on HDDS-519:
-----------------------------------

+1. LGTM

Tested with ozones3 cluster and curl, and works well.

Note: I agree, the volume not found exception is not clear, as there is no volume at s3. One option is to throw bucket not found with custom exception message but it's not so good. But we need to solve it later. We are just moving to an other approach where we have no volume in the url and the volume comes from the username -> volumename naming convention (user elek will use s3elek volume for all the s3 operation).

Summary: will commit in shortly.
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649316#comment-16649316 ]

Hadoop QA commented on HDDS-629:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 29s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 2m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 22m 47s | trunk passed |
| +1 | compile | 18m 48s | trunk passed |
| +1 | checkstyle | 3m 3s | trunk passed |
| +1 | mvnsite | 1m 9s | trunk passed |
| +1 | shadedclient | 15m 24s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| -1 | findbugs | 1m 5s | hadoop-hdds/container-service in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 9s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 9s | the patch passed |
| +1 | compile | 22m 40s | the patch passed |
| +1 | javac | 22m 40s | the patch passed |
| +1 | checkstyle | 3m 37s | the patch passed |
| +1 | mvnsite | 1m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 10s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 1m 7s | the patch passed |
| +1 | javadoc | 1m 3s | the patch passed |
|| Other Tests ||
| -1 | unit | 1m 16s | container-service in the patch failed. |
| -1 | unit | 12m 28s | integration-test in the patch failed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
|    |  | 121m 47s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.keyvalue.TestBlockManagerImpl |
| | hadoop.ozone.web.client.TestKeysRatis |
| | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
| | hadoop.ozone.om.TestScmChillMode |
| | hadoop.ozone.container.TestContainerReplication |
| | hadoop.ozone.container.common.impl.TestContainerPersistence |
| | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
| |
[jira] [Commented] (HDDS-651) Rename o3 to o3fs for Filesystem
[ https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649314#comment-16649314 ] Elek, Marton commented on HDDS-651: --- {quote} The attached patch changes core-default.xml as well. {quote} Yes, but we had a branch cut for hadoop-3.2; core-default.xml on branch-3.2 should also be modified (before the release!). > Rename o3 to o3fs for Filesystem > > > Key: HDDS-651 > URL: https://issues.apache.org/jira/browse/HDDS-651 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Namit Maheshwari >Assignee: Jitendra Nath Pandey >Priority: Blocker > Attachments: HDDS-651.1.patch, HDDS-651.2.patch, HDDS-651.3.patch > > > I propose that we rename o3 to o3fs for Filesystem. > It creates a lot of confusion while using the same name o3 for different > purposes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
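The reason the scheme rename touches core-default.xml is that Hadoop resolves a FileSystem implementation class from the URI scheme via the fs.&lt;scheme&gt;.impl configuration key. A minimal sketch of that lookup (the helper function is illustrative, not Hadoop's actual code):

```python
# Sketch of how Hadoop derives the configuration key for a FileSystem
# implementation from a URI scheme (fs.<scheme>.impl), which is why
# renaming the scheme from o3 to o3fs requires a core-default.xml change.
from urllib.parse import urlparse

def impl_key_for_uri(uri: str) -> str:
    scheme = urlparse(uri).scheme
    if not scheme:
        raise ValueError("URI has no scheme: " + uri)
    return "fs.%s.impl" % scheme
```

Under the rename, the key becomes fs.o3fs.impl instead of fs.o3.impl, so both trunk and branch-3.2 defaults must agree before the 3.2 release.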
[jira] [Commented] (HDDS-651) Rename o3 to o3fs for Filesystem
[ https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649307#comment-16649307 ] Hadoop QA commented on HDDS-651: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 4s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/dist hadoop-ozone/docs {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} dist in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 30s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/dist hadoop-ozone/docs {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 17s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s{color} | {color:green} ozonefs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s{color} | {color:green} dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649301#comment-16649301 ] Shashikant Banerjee commented on HDDS-629: -- Patch v3 removes the truncate option used while opening the chunk file for write, which is not required, and hence removes the TODO item as well. It also fixes the failed test cases. > Make ApplyTransaction calls in ContainerStateMachine idempotent > --- > > Key: HDDS-629 > URL: https://issues.apache.org/jira/browse/HDDS-629 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-629.000.patch, HDDS-629.001.patch, > HDDS-629.002.patch, HDDS-629.003.patch > > > When a Datanode restarts, it may lead up to a case where it can reapply > already applied Transactions when it joins the pipeline again . For this > requirement, all ApplyTransaction calls in Ratis need to be made idempotent -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
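The idempotency requirement in the issue description — a restarted datanode may replay already-applied log entries — is commonly handled by tracking the last applied log index and ignoring replays. The following is a minimal sketch of that idea, not the actual ContainerStateMachine code; the class and method names are invented for illustration:

```python
# Minimal sketch of an idempotent applyTransaction: entries at or below
# the last applied index are treated as replays and become no-ops, so a
# restarted replica can safely re-deliver old log entries.
class IdempotentStateMachine:
    def __init__(self):
        self.last_applied_index = -1
        self.state = []

    def apply_transaction(self, index: int, op) -> bool:
        if index <= self.last_applied_index:
            # Replayed entry: already applied, so applying again is a no-op.
            return False
        self.state.append(op)
        self.last_applied_index = index
        return True
```

Applying the same entry twice leaves the state unchanged, which is exactly the property the JIRA asks of every ApplyTransaction call.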
[jira] [Updated] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-629: - Attachment: HDDS-629.003.patch > Make ApplyTransaction calls in ContainerStateMachine idempotent > --- > > Key: HDDS-629 > URL: https://issues.apache.org/jira/browse/HDDS-629 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-629.000.patch, HDDS-629.001.patch, > HDDS-629.002.patch, HDDS-629.003.patch > > > When a Datanode restarts, it may lead up to a case where it can reapply > already applied Transactions when it joins the pipeline again . For this > requirement, all ApplyTransaction calls in Ratis need to be made idempotent -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649283#comment-16649283 ] Shashikant Banerjee commented on HDDS-629: -- Thanks [~jnp], for the review comments. {code:java} In ChunkUtils.java, the chunkfile is being truncated to zero length. It is possible that the chunk being overwritten starts from an offset. The code assumes that every chunkInfo is for a new file and offset is always zero. Is added TODO statement meant for the above? {code} Currently, overwriting a chunk file is not permitted by default. Once we add the ability to overwrite chunk files, we will need to handle that case; the TODO was added for this. The patch addresses the rest of the review comments. > Make ApplyTransaction calls in ContainerStateMachine idempotent > --- > > Key: HDDS-629 > URL: https://issues.apache.org/jira/browse/HDDS-629 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-629.000.patch, HDDS-629.001.patch, > HDDS-629.002.patch > > > When a Datanode restarts, it may lead up to a case where it can reapply > already applied Transactions when it joins the pipeline again . For this > requirement, all ApplyTransaction calls in Ratis need to be made idempotent -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
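The review exchange above turns on why the chunk write should seek to the chunk's offset rather than truncate the file. A small sketch of the principle (illustrative only — the helper name and in-memory file are assumptions, not ChunkUtils code): seeking and writing at a fixed offset is idempotent under replay, whereas truncating to zero length would discard data written at other offsets.

```python
# Sketch: writing a chunk at its offset (no truncate) is safe to replay.
# Re-applying the same write, as a restarted datanode may do, leaves the
# file content unchanged; truncating to zero first would not.
import io

def write_chunk(f, offset: int, data: bytes) -> None:
    f.seek(offset)
    f.write(data)

buf = io.BytesIO()
write_chunk(buf, 0, b"AAAA")
write_chunk(buf, 4, b"BBBB")
write_chunk(buf, 4, b"BBBB")  # replayed transaction: same bytes, same offset
```

After the replayed write the buffer still holds exactly the two chunks, which is the idempotent behavior the patch aims for.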
[jira] [Updated] (HDDS-629) Make ApplyTransaction calls in ContainerStateMachine idempotent
[ https://issues.apache.org/jira/browse/HDDS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-629: - Attachment: HDDS-629.002.patch > Make ApplyTransaction calls in ContainerStateMachine idempotent > --- > > Key: HDDS-629 > URL: https://issues.apache.org/jira/browse/HDDS-629 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-629.000.patch, HDDS-629.001.patch, > HDDS-629.002.patch > > > When a Datanode restarts, it may lead up to a case where it can reapply > already applied Transactions when it joins the pipeline again . For this > requirement, all ApplyTransaction calls in Ratis need to be made idempotent -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-651) Rename o3 to o3fs for Filesystem
[ https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16649276#comment-16649276 ] Jitendra Nath Pandey commented on HDDS-651: --- HDDS-651.3.patch addresses the checkstyle issues. > Rename o3 to o3fs for Filesystem > > > Key: HDDS-651 > URL: https://issues.apache.org/jira/browse/HDDS-651 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Namit Maheshwari >Assignee: Jitendra Nath Pandey >Priority: Blocker > Attachments: HDDS-651.1.patch, HDDS-651.2.patch, HDDS-651.3.patch > > > I propose that we rename o3 to o3fs for Filesystem. > It creates a lot of confusion while using the same name o3 for different > purposes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-651) Rename o3 to o3fs for Filesystem
[ https://issues.apache.org/jira/browse/HDDS-651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDDS-651: -- Attachment: HDDS-651.3.patch > Rename o3 to o3fs for Filesystem > > > Key: HDDS-651 > URL: https://issues.apache.org/jira/browse/HDDS-651 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Namit Maheshwari >Assignee: Jitendra Nath Pandey >Priority: Blocker > Attachments: HDDS-651.1.patch, HDDS-651.2.patch, HDDS-651.3.patch > > > I propose that we rename o3 to o3fs for Filesystem. > It creates a lot of confusion while using the same name o3 for different > purposes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org