[jira] [Updated] (HDDS-351) Add chill mode state to SCM
[ https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-351: - Fix Version/s: 0.2.1 > Add chill mode state to SCM > --- > > Key: HDDS-351 > URL: https://issues.apache.org/jira/browse/HDDS-351 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-351.00.patch > > > Add chill mode state to SCM -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-376: - Fix Version/s: 0.2.1 > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Fix For: 0.2.1 > > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
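The proposed format above lends itself to a plain key=value renderer. Below is a minimal, hypothetical sketch of such a message class; the class and method names are illustrative and are not taken from the actual HDDS-376 patch, which defines its own message type for the Log4j2-based audit framework.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not the actual HDDS-376 patch) of a custom audit
// message that renders the format proposed above:
//   user=xxx ip=xxx op=XX {key=val, key1=val1..} ret=XX
public class AuditMessageSketch {
  private final String user;
  private final String ip;
  private final String op;
  private final Map<String, String> params;
  private final String ret;

  public AuditMessageSketch(String user, String ip, String op,
      Map<String, String> params, String ret) {
    this.user = user;
    this.ip = ip;
    this.op = op;
    this.params = params;
    this.ret = ret;
  }

  // LinkedHashMap#toString already yields "{key=val, key1=val1}", which
  // matches the proposed parameter layout and preserves insertion order.
  public String getFormattedMessage() {
    return "user=" + user + " ip=" + ip + " op=" + op
        + " " + params + " ret=" + ret;
  }

  public static void main(String[] args) {
    Map<String, String> params = new LinkedHashMap<>();
    params.put("volume", "vol1");
    params.put("bucket", "b1");
    System.out.println(new AuditMessageSketch(
        "testuser", "127.0.0.1", "CREATE_BUCKET", params, "SUCCESS")
        .getFormattedMessage());
    // → user=testuser ip=127.0.0.1 op=CREATE_BUCKET {volume=vol1, bucket=b1} ret=SUCCESS
  }
}
```

In a real Log4j2 integration such a class would implement `org.apache.logging.log4j.message.Message`, so the logger can call its formatting method lazily.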
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592462#comment-16592462 ] genericqa commented on HDDS-376: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} hadoop-hdds/common: The patch generated 0 new + 0 unchanged - 35 fixed = 0 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937117/HDDS-376.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 50758d5d6821 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a4121c7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/848/testReport/ | | Max. process+thread count | 325 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/848/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 >
[jira] [Commented] (HDDS-351) Add chill mode state to SCM
[ https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592455#comment-16592455 ] genericqa commented on HDDS-351: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 28s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 2s{color} | {color:orange} root: The patch generated 28 new + 17 unchanged - 1 fixed = 45 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s{color} | {color:green} framework in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 49s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 0s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}119m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests |
[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline
[ https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDDS-227: - Status: Patch Available (was: Open) > Use Grpc as the default transport protocol for Standalone pipeline > -- > > Key: HDDS-227 > URL: https://issues.apache.org/jira/browse/HDDS-227 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Mukul Kumar Singh >Assignee: chencan >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-227.001.patch, HDDS-227.002.patch, > HDDS-227.003.patch, HDDS-227.004.patch > > > Using a config, the Standalone pipeline can currently choose between Grpc and > Netty based transport protocols; this Jira proposes to use only Grpc as the > transport protocol.
[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline
[ https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDDS-227: - Attachment: HDDS-227.004.patch > Use Grpc as the default transport protocol for Standalone pipeline > -- > > Key: HDDS-227 > URL: https://issues.apache.org/jira/browse/HDDS-227 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Mukul Kumar Singh >Assignee: chencan >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-227.001.patch, HDDS-227.002.patch, > HDDS-227.003.patch, HDDS-227.004.patch > > > Using a config, the Standalone pipeline can currently choose between Grpc and > Netty based transport protocols; this Jira proposes to use only Grpc as the > transport protocol.
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592430#comment-16592430 ] genericqa commented on HDDS-376: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} hadoop-hdds/common: The patch generated 0 new + 0 unchanged - 35 fixed = 0 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937115/HDDS-376.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a62940b34cad 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a4121c7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/847/testReport/ | | Max. process+thread count | 301 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/847/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 >
[jira] [Updated] (HDDS-227) Use Grpc as the default transport protocol for Standalone pipeline
[ https://issues.apache.org/jira/browse/HDDS-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDDS-227: - Status: Open (was: Patch Available) > Use Grpc as the default transport protocol for Standalone pipeline > -- > > Key: HDDS-227 > URL: https://issues.apache.org/jira/browse/HDDS-227 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Mukul Kumar Singh >Assignee: chencan >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-227.001.patch, HDDS-227.002.patch, > HDDS-227.003.patch > > > Using a config, the Standalone pipeline can currently choose between Grpc and > Netty based transport protocols; this Jira proposes to use only Grpc as the > transport protocol.
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Status: Open (was: Patch Available) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: HDDS-376.004.patch Status: Patch Available (was: Open) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: (was: HDDS-376.004.patch) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592427#comment-16592427 ] genericqa commented on HDDS-376: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-hdds/common: The patch generated 2 new + 24 unchanged - 11 fixed = 26 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937102/HDDS-376.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e5ccdc0e3448 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a4121c7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/845/artifact/out/diff-checkstyle-hadoop-hdds_common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/845/testReport/ | | Max. process+thread count | 302 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/845/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create custom
[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF
[ https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592424#comment-16592424 ] Yiqun Lin commented on HDFS-13655: -- I'm also a little worried about the current number of JIRAs. I took a quick glance at recent RBF JIRAs. They cover not only error handling in the Router admin, but also the following: * HDFS-13845: The default MountTableResolver cannot get multi-destination path for the DestinationOrder * HDFS-13841: After Click on another Tab in Router Federation UI page it's not redirecting to new tab * HDFS-13835: RBF: Unable to add files after changing the order * HDFS-13852: RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. * HDFS-13802: RBF: Remove FSCK from Router Web UI, because fsck is not supported currently The critical issues can be merged directly to trunk. For most of the others (non-critical issues), I would prefer to merge these JIRAs in the same branch and stabilize the branch. So the new branch will focus on: * ClientProtocol APIs implementation for RBF * Router admin error handling improvements * Other improvements (e.g. making settings configurable, normal bug fixes) One thing I'm sure of: it's not appropriate to merge all of these directly to trunk now. > RBF: Add missing ClientProtocol APIs to RBF > --- > > Key: HDFS-13655 > URL: https://issues.apache.org/jira/browse/HDFS-13655 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Priority: Major > > As > [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975] > with [~elgoiri], there are some HDFS methods that do not take a path as a > parameter. We should make these work with federation. 
> The ones missing are: > * Snapshots > * Storage policies > * Encryption zones > * Cache pools > One way to reasonably make them work with federation is to 'list' each > nameservice and concatenate the results. This can be done in pretty much the same way as > {{refreshNodes()}}: query all the subclusters > and aggregate the output (e.g., {{getDatanodeReport()}}.)
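The "query every subcluster and aggregate" idea described above can be sketched generically. This is an illustrative stand-in only, assuming a hypothetical `invokeAll` helper; the real Router has its own (concurrent) invocation machinery, and the names below are not taken from the HDFS code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the fan-out pattern described above: invoke the
// same call against every nameservice and concatenate the partial results,
// the way getDatanodeReport() aggregates reports from all subclusters.
// Names are illustrative, not the actual Router code.
public class FanOutSketch {

  // Apply perNsCall to each subcluster and combine the outputs into one list.
  static <T> List<T> invokeAll(List<String> nameservices,
      Function<String, List<T>> perNsCall) {
    List<T> combined = new ArrayList<>();
    for (String ns : nameservices) {
      combined.addAll(perNsCall.apply(ns));
    }
    return combined;
  }

  public static void main(String[] args) {
    // Toy stand-in for, e.g., listing snapshottable directories per subcluster.
    Map<String, List<String>> byNs = Map.of(
        "ns0", List.of("/a", "/b"),
        "ns1", List.of("/c"));
    System.out.println(invokeAll(List.of("ns0", "ns1"), byNs::get));
    // → [/a, /b, /c]
  }
}
```

A production version would issue the per-nameservice RPCs concurrently and decide how to surface partial failures, which a sequential sketch like this glosses over.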
[jira] [Assigned] (HDFS-13869) Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
[ https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ranith Sardar reassigned HDFS-13869: Assignee: Ranith Sardar (was: Surendra Singh Lilhore) > Handle NPE for NamenodeBeanMetrics#getFederationMetrics() > - > > Key: HDFS-13869 > URL: https://issues.apache.org/jira/browse/HDFS-13869 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.0.0 >Reporter: Surendra Singh Lilhore >Assignee: Ranith Sardar >Priority: Major > > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205) > at > org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}
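The stack trace shows `getUsed` dereferencing a metrics object that is still null. Below is a minimal sketch of the defensive pattern a fix might take; the class and method names are hypothetical, not the actual NamenodeBeanMetrics code.

```java
// Hypothetical sketch of the null-guard pattern a fix for the NPE above
// might take: return a default until the federation metrics bean exists,
// instead of dereferencing null. Names are illustrative.
public class MetricsGuardSketch {

  // Stand-in for the federation metrics bean that may not be initialized
  // yet when the NameNode bean is first polled.
  static class FederationMetrics {
    long getUsedCapacity() {
      return 42L;
    }
  }

  private FederationMetrics federationMetrics; // null until initialized

  void setFederationMetrics(FederationMetrics metrics) {
    this.federationMetrics = metrics;
  }

  // Guard against the NullPointerException shown in the stack trace.
  long getUsed() {
    FederationMetrics metrics = federationMetrics;
    return (metrics == null) ? 0L : metrics.getUsedCapacity();
  }

  public static void main(String[] args) {
    MetricsGuardSketch bean = new MetricsGuardSketch();
    System.out.println(bean.getUsed()); // prints 0 before initialization
    bean.setFederationMetrics(new FederationMetrics());
    System.out.println(bean.getUsed()); // prints 42 once initialized
  }
}
```

Copying the field to a local before the null check also avoids a race where the field changes between the check and the call.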
[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592421#comment-16592421 ] genericqa commented on HDFS-13867: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13867 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937106/HDFS-13867-04.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a25c44270a49 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a5eba25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24883/testReport/ | | Max. process+thread count | 969 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24883/console | | Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands >
[jira] [Assigned] (HDFS-13869) Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
[ https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore reassigned HDFS-13869: - Assignee: Surendra Singh Lilhore > Handle NPE for NamenodeBeanMetrics#getFederationMetrics() > - > > Key: HDFS-13869 > URL: https://issues.apache.org/jira/browse/HDFS-13869 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.0.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205) > at > org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13869) Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
Surendra Singh Lilhore created HDFS-13869: - Summary: Handle NPE for NamenodeBeanMetrics#getFederationMetrics() Key: HDFS-13869 URL: https://issues.apache.org/jira/browse/HDFS-13869 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.0.0 Reporter: Surendra Singh Lilhore {code:java} Caused by: java.lang.NullPointerException at org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205) at org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
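The stack trace above suggests getUsed() dereferences a metrics object that can still be null during Router startup. A minimal sketch of the guard such a fix might add; the class and method names mirror the trace, but the bodies are illustrative assumptions, not the actual Hadoop code:

```java
// Illustrative sketch for HDFS-13869: return a default instead of
// dereferencing a FederationMetrics bean that has not been initialized yet.
// All names and bodies here are assumptions for the example, not the patch.
public class NamenodeBeanMetricsSketch {

  /** Stand-in for the real FederationMetrics bean. */
  static class FederationMetrics {
    long getUsedCapacity() {
      return 42L; // placeholder value for the example
    }
  }

  private FederationMetrics metrics; // may be null until the Router is up

  long getUsed() {
    FederationMetrics m = metrics;
    // Null guard: report 0 until the metrics bean is available, instead of
    // throwing the NullPointerException shown in the report.
    return (m == null) ? 0L : m.getUsedCapacity();
  }

  void setMetrics(FederationMetrics m) {
    this.metrics = m;
  }
}
```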
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592401#comment-16592401 ] Dinesh Chitlangia commented on HDDS-376: [~anu] - Apologies for the multiple iterations just for checkstyle issues. My local setup went haywire after an IntelliJ update :( I have restored it and hopefully, it should not show any more violations today. > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: HDDS-376.004.patch > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
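The target format quoted in the description (user=xxx ip=xxx op=XX {key=val, key1=val1..} ret=XX) could be produced by a message class along these lines. This is a sketch under stated assumptions: the class and method names are hypothetical, not the structure the HDDS-376 patch actually adds (the real one would likely implement Log4j 2's Message interface rather than be a plain class).

```java
// Hypothetical sketch of a custom audit message rendering
// "user=xxx ip=xxx op=XX {key=val, key1=val1} ret=XX" as proposed in
// HDDS-376. Field and method names are assumptions for illustration.
import java.util.LinkedHashMap;
import java.util.Map;

public class AuditMessageSketch {
  private final String user;
  private final String ip;
  private final String op;
  private final Map<String, String> params; // insertion-ordered key=val pairs
  private final String ret;

  public AuditMessageSketch(String user, String ip, String op,
                            Map<String, String> params, String ret) {
    this.user = user;
    this.ip = ip;
    this.op = op;
    this.params = new LinkedHashMap<>(params);
    this.ret = ret;
  }

  /** Render the audit entry in the proposed single-line format. */
  public String getFormattedMessage() {
    StringBuilder kv = new StringBuilder();
    for (Map.Entry<String, String> e : params.entrySet()) {
      if (kv.length() > 0) {
        kv.append(", ");
      }
      kv.append(e.getKey()).append('=').append(e.getValue());
    }
    return "user=" + user + " ip=" + ip + " op=" + op
        + " {" + kv + "} ret=" + ret;
  }
}
```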
[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592399#comment-16592399 ] Hudson commented on HDFS-13848: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14827 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14827/]) HDFS-13848. Refactor NameNode failover proxy providers. Contributed by (shv: rev a4121c71c29d13866a605d9c0d013e5de9c147c3) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/IPFailoverProxyProvider.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/FailoverProxyProvider.java > Refactor NameNode failover proxy providers > -- > > Key: HDFS-13848 > URL: https://issues.apache.org/jira/browse/HDFS-13848 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, hdfs-client >Affects Versions: 2.7.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2 > > Attachments: HDFS-13848-003.patch, HDFS-13848-004.patch, > HDFS-13848-005.patch, HDFS-13848.002.patch, HDFS-13848.patch > > > Looking at NN failover proxy providers in the context of HDFS-13782 I noticed > that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have > a lot of common logic. We can move this common logic into > {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: (was: HDDS-376.004.patch) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: HDDS-376.004.patch > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch, HDDS-376.004.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-351) Add chill mode state to SCM
[ https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-351: Status: Patch Available (was: Open) > Add chill mode state to SCM > --- > > Key: HDDS-351 > URL: https://issues.apache.org/jira/browse/HDDS-351 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-351.00.patch > > > Add chill mode state to SCM -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-351) Add chill mode state to SCM
[ https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-351: Attachment: (was: HDDS-351.00.patch) > Add chill mode state to SCM > --- > > Key: HDDS-351 > URL: https://issues.apache.org/jira/browse/HDDS-351 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-351.00.patch > > > Add chill mode state to SCM -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-351) Add chill mode state to SCM
[ https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-351: Attachment: HDDS-351.00.patch > Add chill mode state to SCM > --- > > Key: HDDS-351 > URL: https://issues.apache.org/jira/browse/HDDS-351 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-351.00.patch > > > Add chill mode state to SCM -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592388#comment-16592388 ] genericqa commented on HDDS-376: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-hdds/common: The patch generated 2 new + 24 unchanged - 11 fixed = 26 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937102/HDDS-376.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 622bed4197f5 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a5eba25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/844/artifact/out/diff-checkstyle-hadoop-hdds_common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/844/testReport/ | | Max. process+thread count | 408 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/844/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create custom
[jira] [Updated] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13848: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.2 3.0.4 2.9.2 3.2.0 2.10.0 Status: Resolved (was: Patch Available) I just committed this up to branch-2.9. Thanks [~vagarychen] and [~xkrogen] for the help. > Refactor NameNode failover proxy providers > -- > > Key: HDFS-13848 > URL: https://issues.apache.org/jira/browse/HDFS-13848 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, hdfs-client >Affects Versions: 2.7.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2 > > Attachments: HDFS-13848-003.patch, HDFS-13848-004.patch, > HDFS-13848-005.patch, HDFS-13848.002.patch, HDFS-13848.patch > > > Looking at NN failover proxy providers in the context of HDFS-13782 I noticed > that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have > a lot of common logic. We can move this common logic into > {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
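The refactoring described here, hoisting logic shared by {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} into an abstract base, follows a standard template-method shape. A toy sketch; the class bodies and method names below are illustrative assumptions, not the committed hdfs-client code:

```java
// Toy sketch of the HDFS-13848 shape: common proxy bookkeeping lives in the
// abstract base; subclasses only supply how the NameNode address is chosen.
// All names and bodies are illustrative, not the real Hadoop classes.
import java.util.HashMap;
import java.util.Map;

abstract class NNProxyProviderSketch {
  // Shared logic: cache one "proxy" per resolved address.
  private final Map<String, String> cache = new HashMap<>();

  String getProxy(String nameservice) {
    return cache.computeIfAbsent(resolveAddress(nameservice),
        addr -> "proxy@" + addr);
  }

  /** The only part that differs between the concrete providers. */
  abstract String resolveAddress(String nameservice);
}

class ConfiguredSketch extends NNProxyProviderSketch {
  @Override
  String resolveAddress(String nameservice) {
    // The real provider reads the configured NameNode RPC addresses.
    return nameservice + "-nn1:8020";
  }
}

class IpSketch extends NNProxyProviderSketch {
  @Override
  String resolveAddress(String nameservice) {
    // The real provider targets a single IP that fails over underneath.
    return "10.0.0.1:8020";
  }
}
```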
[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13867: Attachment: HDFS-13867-04.patch > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, > HDFS-13867-03.patch, HDFS-13867-04.patch > > > Add validation to check that the total number of arguments provided to the > Router admin commands is not more than the maximum possible. In most cases, if there > are unrelated extra parameters after the required arguments, the command does not > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it should not do in the ideal case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
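The validation the patch describes can be illustrated with a small helper; the method name and the limits used below are assumptions for the example, not the actual RouterAdmin code:

```java
// Illustrative sketch for HDFS-13867: fail fast when an admin command is
// given more arguments than it accepts, instead of silently ignoring extras.
// The helper name and per-command limits are assumptions, not the patch.
public class AdminArgCheckSketch {

  /** Throw if args exceeds the command's maximum argument count. */
  static void validateMax(String cmd, String[] args, int max) {
    if (args.length > max) {
      throw new IllegalArgumentException("Too many arguments for " + cmd
          + ": expected at most " + max + ", got " + args.length);
    }
  }
}
```

A command handler would call this once up front, e.g. validateMax("-clrQuota", args, 1), so trailing unrelated parameters produce an error instead of being dropped.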
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592380#comment-16592380 ] genericqa commented on HDFS-13695: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 213 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 53s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 424 unchanged - 106 fixed = 426 total (was 530) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 6575 unchanged - 81 fixed = 6582 total (was 6656) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 0s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestWriteReadStripedFile | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13695 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937088/HDFS-13695.v9.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3bb02325f7d0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Status: Open (was: Patch Available) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: HDDS-376.003.patch Status: Patch Available (was: Open) > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch, HDDS-376.002.patch, > HDDS-376.003.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592361#comment-16592361 ]

genericqa commented on HDDS-376:
--------------------------------

| (/) +1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 48s | trunk passed |
| +1 | compile | 0m 33s | trunk passed |
| +1 | checkstyle | 0m 19s | trunk passed |
| +1 | mvnsite | 0m 35s | trunk passed |
| +1 | shadedclient | 11m 26s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 58s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 26s | the patch passed |
| +1 | javac | 0m 26s | the patch passed |
| -0 | checkstyle | 0m 14s | hadoop-hdds/common: The patch generated 5 new + 24 unchanged - 11 fixed = 29 total (was 35) |
| +1 | mvnsite | 0m 29s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 11s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 1s | the patch passed |
| +1 | javadoc | 0m 47s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 50s | common in the patch passed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 51m 7s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-376 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937094/HDDS-376.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a940b64846c0 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a5eba25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/843/artifact/out/diff-checkstyle-hadoop-hdds_common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/843/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/843/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Create custom
[jira] [Commented] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592347#comment-16592347 ]

genericqa commented on HDFS-13849:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 28s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 30s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 18s | trunk passed |
| +1 | compile | 2m 51s | trunk passed |
| +1 | checkstyle | 0m 54s | trunk passed |
| +1 | mvnsite | 1m 6s | trunk passed |
| +1 | shadedclient | 11m 28s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 1m 14s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 2m 48s | the patch passed |
| +1 | javac | 2m 48s | the patch passed |
| +1 | checkstyle | 0m 53s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 20s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 1m 39s | the patch passed |
| +1 | javadoc | 1m 3s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 5m 58s | hadoop-hdfs-native-client in the patch failed. |
| +1 | unit | 2m 38s | hadoop-hdfs-nfs in the patch passed. |
| +1 | unit | 17m 36s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 82m 4s | |

|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
| | test_libhdfs_threaded_hdfspp_test_shim_static |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13849 |
| JIRA Patch URL |
[jira] [Commented] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list
[ https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592343#comment-16592343 ]

Wei-Chiu Chuang commented on HDFS-13830:
-----------------------------------------

[~smeng] thanks for the rev003 patch.
{quote}
# Added missing pieces in JsonUtil.
# Integrated HDFS-13280 NPE fix.
{quote}
Please refrain from incorporating fixes into the same patch, because it makes diagnosing problems harder in the future. Instead, we should backport HDFS-13280 to branch-3.0 on its own.

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list
> ---
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: webhdfs
> Affects Versions: 3.0.3
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support to branch-3.0.
[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592338#comment-16592338 ]

genericqa commented on HDFS-13848:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 50s | Maven dependency ordering for branch |
| +1 | mvninstall | 34m 35s | trunk passed |
| +1 | compile | 21m 32s | trunk passed |
| +1 | checkstyle | 3m 39s | trunk passed |
| +1 | mvnsite | 2m 30s | trunk passed |
| +1 | shadedclient | 19m 25s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 45s | trunk passed |
| +1 | javadoc | 1m 44s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 35s | the patch passed |
| +1 | compile | 15m 31s | the patch passed |
| +1 | javac | 15m 31s | the patch passed |
| -0 | checkstyle | 3m 15s | root: The patch generated 10 new + 7 unchanged - 14 fixed = 17 total (was 21) |
| +1 | mvnsite | 2m 1s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 17s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 39s | the patch passed |
| +1 | javadoc | 1m 33s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 47s | hadoop-common in the patch failed. |
| +1 | unit | 1m 41s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 40s | The patch does not generate ASF License warnings. |
| | | 136m 1s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestBasicDiskValidator |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13848 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937069/HDFS-13848-005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 2123618ee743 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia updated HDDS-376:
-----------------------------------
    Attachment: HDDS-376.002.patch
        Status: Patch Available  (was: Open)

> Create custom message structure for use in AuditLogging
> ---
>
> Key: HDDS-376
> URL: https://issues.apache.org/jira/browse/HDDS-376
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Dinesh Chitlangia
> Assignee: Dinesh Chitlangia
> Priority: Major
> Labels: audit, logging
> Attachments: HDDS-376.001.patch, HDDS-376.002.patch
>
> In HDDS-198 we introduced a framework for AuditLogging in Ozone.
> We had used StructuredDataMessage for formatting the messages to be logged.
>
> Based on discussion with [~jnp] and [~anu], this Jira proposes to create a
> custom message structure to generate audit messages in the following format:
> user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia updated HDDS-376:
-----------------------------------
        Status: Open  (was: Patch Available)

> Create custom message structure for use in AuditLogging
> ---
>
> Key: HDDS-376
> URL: https://issues.apache.org/jira/browse/HDDS-376
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Dinesh Chitlangia
> Assignee: Dinesh Chitlangia
> Priority: Major
> Labels: audit, logging
> Attachments: HDDS-376.001.patch, HDDS-376.002.patch
>
> In HDDS-198 we introduced a framework for AuditLogging in Ozone.
> We had used StructuredDataMessage for formatting the messages to be logged.
>
> Based on discussion with [~jnp] and [~anu], this Jira proposes to create a
> custom message structure to generate audit messages in the following format:
> user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
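For context on the audit message layout proposed in HDDS-376 (user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX), below is a minimal sketch of how such a message could be assembled. The class and method names are illustrative, not the ones used in the attached patches, and the sketch deliberately builds a plain string rather than implementing Log4j2's Message interface:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Illustrative sketch of the audit message layout discussed in HDDS-376:
//   user=xxx ip=xxx op=XX {key=val, key1=val1..} ret=XX
// Names here (AuditMessageSketch, getFormattedMessage) are hypothetical.
public class AuditMessageSketch {
    private final String formatted;

    public AuditMessageSketch(String user, String ip, String op,
                              Map<String, String> params, String ret) {
        // Render the op parameters as "{key=val, key1=val1}" in insertion order.
        StringJoiner kv = new StringJoiner(", ", "{", "}");
        for (Map.Entry<String, String> e : params.entrySet()) {
            kv.add(e.getKey() + "=" + e.getValue());
        }
        formatted = "user=" + user + " ip=" + ip + " op=" + op
            + " " + kv + " ret=" + ret;
    }

    public String getFormattedMessage() {
        return formatted;
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps the key=val pairs in a deterministic order.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("volume", "vol1");
        params.put("bucket", "b1");
        System.out.println(new AuditMessageSketch(
            "alice", "10.0.0.5", "CREATE_BUCKET", params, "SUCCESS")
            .getFormattedMessage());
        // prints: user=alice ip=10.0.0.5 op=CREATE_BUCKET {volume=vol1, bucket=b1} ret=SUCCESS
    }
}
```

A real implementation wired into the AuditLogging framework from HDDS-198 would implement the logging library's message abstraction instead of returning a raw string, so the audit logger can format it lazily.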
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592329#comment-16592329 ]

genericqa commented on HDFS-13695:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 265 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 21s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 2m 50s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 15m 7s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 55s | trunk passed |
| +1 | javadoc | 0m 48s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 55s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 54s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 54s | hadoop-hdfs in the patch failed. |
| -0 | checkstyle | 2m 45s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 394 new + 6694 unchanged - 605 fixed = 7088 total (was 7299) |
| -1 | mvnsite | 0m 58s | hadoop-hdfs in the patch failed. |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | shadedclient | 3m 34s | patch has errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 36s | hadoop-hdfs in the patch failed. |
| +1 | javadoc | 0m 52s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 0m 53s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 52m 36s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13695 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937084/HDFS-13695.v8.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c3d48565ad89 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a5eba25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/24879/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/24879/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
| javac |
[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592325#comment-16592325 ]

genericqa commented on HDFS-13867:
-----------------------------------

| (/) +1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 31s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 19s | trunk passed |
| +1 | mvnsite | 0m 32s | trunk passed |
| +1 | shadedclient | 11m 45s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 48s | trunk passed |
| +1 | javadoc | 0m 31s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 28s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| -0 | checkstyle | 0m 15s | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| +1 | mvnsite | 0m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 30s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 2s | the patch passed |
| +1 | javadoc | 0m 31s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 15m 28s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 66m 27s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13867 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937080/HDFS-13867-03.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 97fd306f2e63 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a5eba25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/24878/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24878/testReport/ |
| Max. process+thread count | 954 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24878/console |
| Powered by | Apache Yetus
[jira] [Comment Edited] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592318#comment-16592318 ]

Chen Liang edited comment on HDFS-13782 at 8/24/18 11:57 PM:
-------------------------------------------------------------

As a minimal conflict resolution Jira, v002 patch LGTM.

was (Author: vagarychen):
As a minimal conflict resolution Jira, v002 patch LGTM, pending Jenkins.

> ObserverReadProxyProvider should work with IPFailoverProxyProvider
> ---
>
> Key: HDFS-13782
> URL: https://issues.apache.org/jira/browse/HDFS-13782
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: test
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Priority: Major
> Attachments: HDFS-13782-HDFS-12943.001.patch, HDFS-13782-HDFS-12943.002.patch
>
> Currently {{ObserverReadProxyProvider}} is based on {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads in case of {{IPFailoverProxyProvider}}.
[jira] [Commented] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592318#comment-16592318 ]

Chen Liang commented on HDFS-13782:
------------------------------------

As a minimal conflict resolution Jira, v002 patch LGTM, pending Jenkins.

> ObserverReadProxyProvider should work with IPFailoverProxyProvider
> ---
>
> Key: HDFS-13782
> URL: https://issues.apache.org/jira/browse/HDFS-13782
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: test
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Priority: Major
> Attachments: HDFS-13782-HDFS-12943.001.patch, HDFS-13782-HDFS-12943.002.patch
>
> Currently {{ObserverReadProxyProvider}} is based on {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads in case of {{IPFailoverProxyProvider}}.
[jira] [Commented] (HDFS-13837) hdfs.TestDistributedFileSystem.testDFSClient: test is flaky
[ https://issues.apache.org/jira/browse/HDFS-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592316#comment-16592316 ]

genericqa commented on HDFS-13837:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 5s | HDFS-13837 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13837 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937089/HDFS-13837.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24882/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> hdfs.TestDistributedFileSystem.testDFSClient: test is flaky
> ---
>
> Key: HDFS-13837
> URL: https://issues.apache.org/jira/browse/HDFS-13837
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Shweta
> Assignee: Shweta
> Priority: Major
> Attachments: HDFS-13837.001.patch, HDFS-13837.002.patch, TestDistributedFileSystem.testDFSClient_Stderr_log
>
> Stack Trace:
> {noformat}
> java.lang.AssertionError
> at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:449)
> {noformat}
> Stdout:
> {noformat}
> [truncated]kmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3385)) - Number of blocks being written = 0
> 2018-07-31 21:42:46,675 [Reconstruction Queue Initializer] INFO hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3388)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
> 2018-07-31 21:42:46,676 [IPC Server Responder] INFO ipc.Server (Server.java:run(1307)) - IPC Server Responder: starting
> 2018-07-31 21:42:46,676 [IPC Server listener on port1] INFO ipc.Server (Server.java:run(1146)) - IPC Server listener on port1: starting
> 2018-07-31 21:42:46,678 [main] INFO namenode.NameNode (NameNode.java:startCommonServices(831)) - NameNode RPC up at: localhost/x.x.x.x:port1
> 2018-07-31 21:42:46,678 [main] INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1230)) - Starting services required for active state
> 2018-07-31 21:42:46,678 [main] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(758)) - Initializing quota with 4 thread(s)
> 2018-07-31 21:42:46,679 [main] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(767)) - Quota initialization completed in 0 milliseconds
> name space=1
> storage space=0
> storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
> 2018-07-31 21:42:46,682 [CacheReplicationMonitor(752355)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 3 milliseconds
> 2018-07-31 21:42:46,686 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1599)) - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
> 2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
> 2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
> 2018-07-31 21:42:46,695 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
> 2018-07-31 21:42:46,695 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
> 2018-07-31 21:42:46,695 [main] INFO datanode.BlockScanner (BlockScanner.java:(184)) - Initialized block scanner with targetBytesPerSec 1048576
> 2018-07-31 21:42:46,696 [main] INFO datanode.DataNode (DataNode.java:(496)) - Configured hostname is x.x.x.x
> 2018-07-31
[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements
[ https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592312#comment-16592312 ] genericqa commented on HDFS-13791: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 52s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 55s{color} | {color:green} HDFS-12943 passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} The patch fails to run checkstyle in root {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 40m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 40m 26s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} The patch fails to run checkstyle in root {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 41s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 22s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}289m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.fs.viewfs.TestViewFsAtHdfsRoot | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 | | JIRA Issue | HDFS-13791 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937047/HDFS-13791-HDFS-12943.000.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall
[jira] [Commented] (HDFS-13837) hdfs.TestDistributedFileSystem.testDFSClient: test is flaky
[ https://issues.apache.org/jira/browse/HDFS-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592311#comment-16592311 ] Shweta commented on HDFS-13837: --- Thanks [~xiaochen] for the prompt review. Yes, the LeaseRenewer logs will be helpful when the test fails. Also, as per your suggestion I have made the change at class level so that LeaseRenewer logging is present for all the tests in the class. I have added a new patch with this change. Please review. Thank you. :) > hdfs.TestDistributedFileSystem.testDFSClient: test is flaky > --- > > Key: HDFS-13837 > URL: https://issues.apache.org/jira/browse/HDFS-13837 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-13837.001.patch, HDFS-13837.002.patch, > TestDistributedFileSystem.testDFSClient_Stderr_log > > > Stack Trace : > {noformat} > java.lang.AssertionError > at > org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:449) > {noformat}
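The class-level change described in the comment above can be sketched as follows. This is an illustrative stand-in using java.util.logging, not the actual patch: the real change would raise the Hadoop LeaseRenewer logger's level via the test utilities, and the logger name and messages below are assumptions.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Stand-in sketch: enable a renewer logger once at class level so that
// every test in the class captures its output. Names are illustrative.
public class ClassLevelLogCapture {
    static final Logger RENEWER_LOG = Logger.getLogger("LeaseRenewer");
    static final StringBuilder CAPTURED = new StringBuilder();

    // Equivalent of a @BeforeClass hook: raise the level and attach a
    // capturing handler once for the whole test class, not per test.
    static void enableCaptureForAllTests() {
        RENEWER_LOG.setLevel(Level.ALL);
        Handler handler = new Handler() {
            @Override public void publish(LogRecord r) {
                CAPTURED.append(r.getMessage()).append('\n');
            }
            @Override public void flush() { }
            @Override public void close() { }
        };
        handler.setLevel(Level.ALL);
        RENEWER_LOG.addHandler(handler);
    }
}
```

With this in place, a FINE-level message logged from any test in the class ends up in the captured buffer, which is the property the comment relies on when the flaky assertion fires.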
[jira] [Updated] (HDFS-13837) hdfs.TestDistributedFileSystem.testDFSClient: test is flaky
[ https://issues.apache.org/jira/browse/HDFS-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-13837: -- Attachment: HDFS-13837.002.patch > hdfs.TestDistributedFileSystem.testDFSClient: test is flaky > --- > > Key: HDFS-13837 > URL: https://issues.apache.org/jira/browse/HDFS-13837 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-13837.001.patch, HDFS-13837.002.patch, > TestDistributedFileSystem.testDFSClient_Stderr_log > > > Stack Trace : > {noformat} > java.lang.AssertionError > at > org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:449) > {noformat}
[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Pickering updated HDFS-13695: - Attachment: HDFS-13695.v9.patch > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, > HDFS-13695.v3.patch, HDFS-13695.v4.patch, HDFS-13695.v5.patch, > HDFS-13695.v6.patch, HDFS-13695.v7.patch, HDFS-13695.v8.patch, > HDFS-13695.v9.patch > > > Move logging to slf4j in HDFS package -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592305#comment-16592305 ] genericqa commented on HDDS-376: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-hdds/common: The patch generated 5 new + 26 unchanged - 9 fixed = 31 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937076/HDDS-376.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e3a949ea57e3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a5eba25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/842/artifact/out/diff-checkstyle-hadoop-hdds_common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/842/testReport/ | | Max. process+thread count | 334 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common U: hadoop-hdds/common | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/842/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create custom
[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
[ https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-13868: -- Description: HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS. Proof: {code:java} # Bash # Prerequisite: You will need to create the directory "/snapshot", allowSnapshot() on it, and create a snapshot named "snap3" for it to reach NPE. $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" # Empty string for oldsnapshotname {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" # Missing param oldsnapshotname, essentially the same as the first case. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} was: Proof: {code:java} # Bash # Prerequisite: You will need to create the directory "/snapshot", allowSnapshot() on it, and create a snapshot named "snap3" for it to reach NPE. $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. 
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" # Empty string for oldsnapshotname {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" # Missing param oldsnapshotname, essentially the same as the first case. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} > WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but > "oldsnapshotname" is not. > - > > Key: HDFS-13868 > URL: https://issues.apache.org/jira/browse/HDFS-13868 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.1.0, 3.0.3 >Reporter: Siyao Meng >Priority: Major > > HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS. > > Proof: > {code:java} > # Bash > # Prerequisite: You will need to create the directory "/snapshot", > allowSnapshot() on it, and create a snapshot named "snap3" for it to reach > NPE. > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" > # Note that I intentionally typed the wrong parameter name for > "oldsnapshotname" above to cause NPE. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" > # Empty string for oldsnapshotname > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" > # Missing param oldsnapshotname, essentially the same as the first case. 
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
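A minimal guard of the kind this report motivates could look like the sketch below. The method and message wording are hypothetical, not the actual WebHDFS handler code (which resolves parameters through its own Param classes); the point is that a missing or empty oldsnapshotname should produce a clear error rather than a NullPointerException.

```java
// Hypothetical server-side guard: validate both snapshot-name parameters
// before computing a diff. Names and the return value are illustrative.
public class SnapshotDiffParamCheck {
    static String getSnapshotDiff(String path, String oldSnapshotName,
                                  String snapshotName) {
        if (oldSnapshotName == null || oldSnapshotName.isEmpty()) {
            // This is the case the curl commands above trigger today.
            throw new IllegalArgumentException(
                "Missing required parameter: oldsnapshotname");
        }
        if (snapshotName == null || snapshotName.isEmpty()) {
            throw new IllegalArgumentException(
                "Missing required parameter: snapshotname");
        }
        return "diff(" + path + ", " + oldSnapshotName
             + " -> " + snapshotName + ")";
    }
}
```

With such a check, the RemoteException returned to the caller would carry an actionable message instead of "message":null.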
[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Pickering updated HDFS-13695: - Attachment: HDFS-13695.v8.patch > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, > HDFS-13695.v3.patch, HDFS-13695.v4.patch, HDFS-13695.v5.patch, > HDFS-13695.v6.patch, HDFS-13695.v7.patch, HDFS-13695.v8.patch > > > Move logging to slf4j in HDFS package -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Pickering updated HDFS-13849: - Attachment: HDFS-13849.v2.patch > Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, > hadoop-hdfs-rbf, hadoop-hdfs-native-client > --- > > Key: HDFS-13849 > URL: https://issues.apache.org/jira/browse/HDFS-13849 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ian Pickering >Assignee: Ian Pickering >Priority: Minor > Attachments: HDFS-13849.v1.patch, HDFS-13849.v1.patch, > HDFS-13849.v2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
[ https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-13868: -- Description: Proof: {code:java} # Bash # Prerequisite: You will need to create the directory "/snapshot", allowSnapshot() on it, and create a snapshot named "snap3" for it to reach NPE. $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" # Empty string for oldsnapshotname {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" # Missing param oldsnapshotname, essentially the same as the first case. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} was: Proof: {code:java} # Bash $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" # Empty string for oldsnapshotname {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" # Missing param oldsnapshotname, essentially the same as the first case. 
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} > WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but > "oldsnapshotname" is not. > - > > Key: HDFS-13868 > URL: https://issues.apache.org/jira/browse/HDFS-13868 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.1.0, 3.0.3 >Reporter: Siyao Meng >Priority: Major > > Proof: > {code:java} > # Bash > # Prerequisite: You will need to create the directory "/snapshot", > allowSnapshot() on it, and create a snapshot named "snap3" for it to reach > NPE. > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" > # Note that I intentionally typed the wrong parameter name for > "oldsnapshotname" above to cause NPE. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" > # Empty string for oldsnapshotname > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" > # Missing param oldsnapshotname, essentially the same as the first case. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-13800) Improve the error message when contacting an IPC port via a browser
[ https://issues.apache.org/jira/browse/HDFS-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13800 started by Vaibhav Gandhi. - > Improve the error message when contacting an IPC port via a browser > --- > > Key: HDFS-13800 > URL: https://issues.apache.org/jira/browse/HDFS-13800 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Daniel Templeton >Assignee: Vaibhav Gandhi >Priority: Major > Labels: newbie > > When I point a browser at {{http://:9000}}, I get back a 404 with > the following text: {quote}It looks like you are making an HTTP request to a > Hadoop IPC port. This is not the correct port for the web interface on this > daemon.{quote} While accurate, that's not exactly helpful. It would be > worlds more useful to include the URL for the web UI in the text. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
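One possible shape of the improvement the issue requests, assuming the daemon can supply its configured HTTP address; the method name and message wording below are illustrative, not the actual Hadoop IPC server code.

```java
// Hedged sketch: append the daemon's web UI address to the "wrong port"
// message so a browser user knows where to go next. The host and port
// would come from the daemon's configured HTTP address in a real fix.
public class IpcPortHint {
    static String wrongPortMessage(String host, int httpPort) {
        return "It looks like you are making an HTTP request to a Hadoop IPC port. "
             + "This is not the correct port for the web interface on this daemon. "
             + "Try the web UI at http://" + host + ":" + httpPort + "/ instead.";
    }
}
```

The existing text is kept verbatim and only the actionable hint is appended, so log scrapers matching the current message keep working.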
[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592278#comment-16592278 ] Ayush Saxena commented on HDFS-13867: - [~elgoiri] I have uploaded a patch addressing the checkstyle warnings. Please review. > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, > HDFS-13867-03.patch > > > Add validation to check that the total number of arguments provided for the > Router Admin commands is not more than the maximum possible. In most cases, if there > are unrelated extra parameters after the required arguments, it doesn't > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it ideally shouldn't. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13867: Attachment: HDFS-13867-03.patch > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, > HDFS-13867-03.patch > > > Add validation to check that the total number of arguments provided for the > Router admin commands does not exceed the maximum possible. In most cases, if > unrelated extra parameters follow the required arguments, the command does not > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it ideally should not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592274#comment-16592274 ] Íñigo Goiri commented on HDFS-13867: I think we can fix the checkstyle warnings. [~ayushtkn] can you provide a patch fixing them? > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check that the total number of arguments provided for the > Router admin commands does not exceed the maximum possible. In most cases, if > unrelated extra parameters follow the required arguments, the command does not > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it ideally should not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-374) Support to configure container size in units lesser than GB
[ https://issues.apache.org/jira/browse/HDDS-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592272#comment-16592272 ] genericqa commented on HDDS-374:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 17 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 27s | trunk passed |
| +1 | compile | 20m 14s | trunk passed |
| +1 | checkstyle | 4m 17s | trunk passed |
| +1 | mvnsite | 2m 6s | trunk passed |
| +1 | shadedclient | 18m 26s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 1m 59s | trunk passed |
| +1 | javadoc | 2m 2s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 27s | the patch passed |
| +1 | compile | 15m 41s | the patch passed |
| +1 | javac | 15m 41s | the patch passed |
| +1 | checkstyle | 3m 55s | root: The patch generated 0 new + 8 unchanged - 7 fixed = 8 total (was 15) |
| +1 | mvnsite | 2m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 17m 45s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 1s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 6m 28s | the patch passed |
| +1 | javadoc | 6m 34s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 52s | common in the patch passed. |
| +1 | unit | 2m 45s | container-service in the patch passed. |
| -1 | unit | 10m 54s | integration-test in the patch failed. |
| +1 | asflicense | 0m 43s | The patch does not generate ASF License warnings. |
| | | 146m 18s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-374 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937057/HDDS-374.000.patch |
[jira] [Comment Edited] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF
[ https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592269#comment-16592269 ] Íñigo Goiri edited comment on HDFS-13655 at 8/24/18 10:37 PM: -- I'm also a little worried about the number of new JIRAs. The main problem is that most of them are pretty minimal and related to the error handling of the Router admin: * HDFS-13867: validation for max arguments * HDFS-13861: print usage only for the right command * HDFS-13858: check for the safemode command * HDFS-13815: proper managing of the order parameter (There's a couple more I cannot even find; this shows how hard it is getting to track this.) I'm even tempted to move all of those to a single JIRA or put under an umbrella. Not sure if it makes sense to merge those JIRAs with this umbrella in the same branch though. Feedback is welcome. was (Author: elgoiri): I'm also a little worried about the number of new JIRAs. The main problem is that most of them are pretty minimal and related to the error handling of the Router admin: * HDFS-13867: validation for max arguments * HDFS-13861: print usage only for the right command * HDFS-13858: check for the safemode command * HDFS-13815: proper managing of the order parameter (There's a couple more I cannot even find; this shows how hard it is getting to track this.) I'm even tempted to move all of those to a single JIRA or put under an umbrella. Not sure if it makes sense to merge those JIRAs with this umbrella in the same branch though. Feedback is welcome. 
> RBF: Add missing ClientProtocol APIs to RBF > --- > > Key: HDFS-13655 > URL: https://issues.apache.org/jira/browse/HDFS-13655 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Priority: Major > > As > [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975] > with [~elgoiri], there are some HDFS methods that do not take a path as a > parameter. We should support these to work with federation. > The ones missing are: > * Snapshots > * Storage policies > * Encryption zones > * Cache pools > One way to reasonably have them work with federation is to 'list' each > nameservice and concat the results. This can be done pretty much the same as > {{refreshNodes()}} and it would be a matter of querying all the subclusters > and aggregating the output (e.g., {{getDatanodeReport()}}.) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF
[ https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592269#comment-16592269 ] Íñigo Goiri commented on HDFS-13655: I'm also a little worried about the number of new JIRAs. The main problem is that most of them are pretty minimal and related to the error handling of the Router admin: * HDFS-13867: validation for max arguments * HDFS-13861: print usage only for the right command * HDFS-13858: check for the safemode command * HDFS-13815: proper managing of the order parameter (There's a couple more I cannot even find; this shows how hard it is getting to track this.) I'm even tempted to move all of those to a single JIRA or put under an umbrella. Not sure if it makes sense to merge those JIRAs with this umbrella in the same branch though. Feedback is welcome. > RBF: Add missing ClientProtocol APIs to RBF > --- > > Key: HDFS-13655 > URL: https://issues.apache.org/jira/browse/HDFS-13655 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Priority: Major > > As > [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975] > with [~elgoiri], there are some HDFS methods that do not take a path as a > parameter. We should support these to work with federation. > The ones missing are: > * Snapshots > * Storage policies > * Encryption zones > * Cache pools > One way to reasonably have them work with federation is to 'list' each > nameservice and concat the results. This can be done pretty much the same as > {{refreshNodes()}} and it would be a matter of querying all the subclusters > and aggregating the output (e.g., {{getDatanodeReport()}}.) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice command
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13867: --- Summary: RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice command (was: RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands) > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice command > - > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check that the total number of arguments provided for the > Router admin commands does not exceed the maximum possible. In most cases, if > unrelated extra parameters follow the required arguments, the command does not > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it ideally should not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13867: --- Summary: RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands (was: RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice command) > RBF: Add validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check that the total number of arguments provided for the > Router admin commands does not exceed the maximum possible. In most cases, if > unrelated extra parameters follow the required arguments, the command does not > validate against this but instead performs the action with the required > parameters and ignores the extra ones, which it ideally should not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592260#comment-16592260 ] Konstantin Shvachko commented on HDFS-13782: Updated the patch to the latest of HDFS-13848. > ObserverReadProxyProvider should work with IPFailoverProxyProvider > -- > > Key: HDFS-13782 > URL: https://issues.apache.org/jira/browse/HDFS-13782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13782-HDFS-12943.001.patch, > HDFS-13782-HDFS-12943.002.patch > > > Currently {{ObserverReadProxyProvider}} is based on > {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads > in case of {{IPFailoverProxyProvider}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592253#comment-16592253 ] Konstantin Shvachko edited comment on HDFS-13782 at 8/24/18 10:30 PM: -- Thanks for the review, [~xkrogen]. * I target this issue as a minimal patch to bridge HDFS-13848 and enable IP failover. HDFS-13848 will break ORPP once pulled into the branch, so this will need to be committed along with the merge. So let's do other improvements in subsequent jiras, including HDFS-13779 and HDFS-13780. * {{ObserverReadProxyProvider}} is not tied to CFPP, rather it uses CFPP by default. I did add {{ObserverReadProxyProviderWithIPFailover}}, but I think that most people use CFPP, so it seems natural to default ORPP to CFPP for failover, rather than creating an extra class. Don't like adding another config parameter - it is really not necessary. was (Author: shv): Thanks for the review, [~xkrogen]. * I target this issue as a minimal patch to bridge HDFS-13848 and enable IP failover. HDFS-13848 will break ORPP once pulled into the branch, so this will need to be committed along with the merge. So let's do other improvements in subsequent jiras, including HDFS-13779 and HDFS-13780. * {{ObserverReadProxyProvider}} is not tied to CFPP, rather it uses CFPP by default. I did add {{ObserverReadProxyProviderWithIPFailover}}, but I think that most people use CFPP, so it seems natural to default ORPP to CFPP for failover, rather than creating an extra class. > ObserverReadProxyProvider should work with IPFailoverProxyProvider > -- > > Key: HDFS-13782 > URL: https://issues.apache.org/jira/browse/HDFS-13782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13782-HDFS-12943.001.patch, > HDFS-13782-HDFS-12943.002.patch > > > Currently {{ObserverReadProxyProvider}} is based on > {{ConfiguredFailoverProxyProvider}}. 
We should also be able to perform SBN reads > in case of {{IPFailoverProxyProvider}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13782: --- Attachment: HDFS-13782-HDFS-12943.002.patch > ObserverReadProxyProvider should work with IPFailoverProxyProvider > -- > > Key: HDFS-13782 > URL: https://issues.apache.org/jira/browse/HDFS-13782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13782-HDFS-12943.001.patch, > HDFS-13782-HDFS-12943.002.patch > > > Currently {{ObserverReadProxyProvider}} is based on > {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads > in case of {{IPFailoverProxyProvider}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592256#comment-16592256 ] genericqa commented on HDDS-359:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 38s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 0s | trunk passed |
| +1 | compile | 0m 58s | trunk passed |
| +1 | checkstyle | 0m 26s | trunk passed |
| +1 | mvnsite | 1m 34s | trunk passed |
| +1 | shadedclient | 11m 49s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdds |
| +1 | findbugs | 0m 54s | trunk passed |
| +1 | javadoc | 2m 10s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 53s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 20s | the patch passed |
| +1 | mvnsite | 1m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 5s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdds |
| +1 | findbugs | 1m 4s | the patch passed |
| +1 | javadoc | 2m 6s | the patch passed |
|| Other Tests ||
| +1 | unit | 3m 23s | hadoop-hdds in the patch passed. |
| +1 | unit | 1m 22s | common in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 61m 54s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-359 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937065/HDDS-359.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 05677be3a378 4.4.0-133-generic #159-Ubuntu
[jira] [Commented] (HDFS-13782) ObserverReadProxyProvider should work with IPFailoverProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592253#comment-16592253 ] Konstantin Shvachko commented on HDFS-13782: Thanks for the review, [~xkrogen]. * I target this issue as a minimal patch to bridge HDFS-13848 and enable IP failover. HDFS-13848 will break ORPP once pulled into the branch, so this will need to be committed along with the merge. So let's do other improvements in subsequent jiras, including HDFS-13779 and HDFS-13780. * {{ObserverReadProxyProvider}} is not tied to CFPP, rather it uses CFPP by default. I did add {{ObserverReadProxyProviderWithIPFailover}}, but I think that most people use CFPP, so it seems natural to default ORPP to CFPP for failover, rather than creating an extra class. > ObserverReadProxyProvider should work with IPFailoverProxyProvider > -- > > Key: HDFS-13782 > URL: https://issues.apache.org/jira/browse/HDFS-13782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13782-HDFS-12943.001.patch > > > Currently {{ObserverReadProxyProvider}} is based on > {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads > in case of {{IPFailoverProxyProvider}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
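The composition being discussed above — ORPP delegating failover to CFPP by default, with a thin subclass swapping in IP failover instead of a new config parameter — can be modeled abstractly. The classes below are illustrative stand-ins only, not the real Hadoop proxy providers:

```java
// Illustrative-only model of the design: the observer-read provider wraps an inner
// failover provider, defaulting to the "configured" one; a thin subclass substitutes
// IP failover, so no extra configuration parameter is needed.
interface FailoverProvider { String failoverTarget(); }

class ConfiguredFailover implements FailoverProvider {
    public String failoverTarget() { return "configured-namenode"; }
}

class IpFailover implements FailoverProvider {
    public String failoverTarget() { return "virtual-ip"; }
}

class ObserverRead {
    final FailoverProvider inner;
    ObserverRead() { this(new ConfiguredFailover()); } // CFPP-style default
    ObserverRead(FailoverProvider p) { this.inner = p; }
    String failoverTarget() { return inner.failoverTarget(); }
}

class ObserverReadWithIpFailover extends ObserverRead {
    ObserverReadWithIpFailover() { super(new IpFailover()); } // thin subclass
}
```

This mirrors the trade-off in the comment: most users get the default without any new setting, and IP-failover users pick the subclass.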
[jira] [Work started] (HDDS-98) Adding Ozone Manager Audit Log
[ https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDDS-98 started by Dinesh Chitlangia. - > Adding Ozone Manager Audit Log > -- > > Key: HDDS-98 > URL: https://issues.apache.org/jira/browse/HDDS-98 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Dinesh Chitlangia >Priority: Major > Labels: Logging, audit > Fix For: 0.2.1 > > Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, > HDDS-98.004.patch, audit.log, log4j2.properties > > > This ticket is opened to add ozone manager's audit log. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-98) Adding Ozone Manager Audit Log
[ https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-98: -- Status: Open (was: Patch Available) > Adding Ozone Manager Audit Log > -- > > Key: HDDS-98 > URL: https://issues.apache.org/jira/browse/HDDS-98 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Dinesh Chitlangia >Priority: Major > Labels: Logging, audit > Fix For: 0.2.1 > > Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, > HDDS-98.004.patch, audit.log, log4j2.properties > > > This ticket is opened to add ozone manager's audit log. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HDDS-376: --- Attachment: HDDS-376.001.patch Status: Patch Available (was: In Progress) [~anu] - Request you to please review this patch. > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > Attachments: HDDS-376.001.patch > > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDDS-376) Create custom message structure for use in AuditLogging
[ https://issues.apache.org/jira/browse/HDDS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDDS-376 started by Dinesh Chitlangia. -- > Create custom message structure for use in AuditLogging > --- > > Key: HDDS-376 > URL: https://issues.apache.org/jira/browse/HDDS-376 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: audit, logging > > In HDDS-198 we introduced a framework for AuditLogging in Ozone. > We had used StructuredDataMessage for formatting the messages to be logged. > > Based on discussion with [~jnp] and [~anu], this Jira proposes to create a > custom message structure to generate audit messages in the following format: > user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-376) Create custom message structure for use in AuditLogging
Dinesh Chitlangia created HDDS-376: -- Summary: Create custom message structure for use in AuditLogging Key: HDDS-376 URL: https://issues.apache.org/jira/browse/HDDS-376 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Dinesh Chitlangia Assignee: Dinesh Chitlangia In HDDS-198 we introduced a framework for AuditLogging in Ozone. We had used StructuredDataMessage for formatting the messages to be logged. Based on discussion with [~jnp] and [~anu], this Jira proposes to create a custom message structure to generate audit messages in the following format: user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
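The proposed format is concrete enough to illustrate with a small sketch. The builder class below is hypothetical — the actual patch defines its own message type — but it renders the same `user=xxx ip=xxx op=OP {key=val, ...} ret=XX` shape described in the issue:

```java
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of rendering an audit entry in the proposed
// "user=xxx ip=xxx op=OP {key=val, key1=val1, ...} ret=XX" shape.
class AuditMessageSketch {
    static String format(String user, String ip, String op,
                         Map<String, String> params, String ret) {
        // Join the operation parameters as "key=val" pairs inside braces.
        String kv = params.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining(", "));
        return "user=" + user + " ip=" + ip + " op=" + op
            + " {" + kv + "} ret=" + ret;
    }
}
```

A call with user `hdfs`, ip `127.0.0.1`, op `CREATE_VOLUME`, and one `volume=vol1` parameter would produce `user=hdfs ip=127.0.0.1 op=CREATE_VOLUME {volume=vol1} ret=SUCCESS`.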
[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
[ https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-13868: -- Description: Proof: {code:java} # Bash $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" # Empty string for oldsnapshotname {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" # Missing param oldsnapshotname, essentially the same as the first case. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} was: Proof: {code:java} # Bash $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} > WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but > "oldsnapshotname" is not. 
> - > > Key: HDFS-13868 > URL: https://issues.apache.org/jira/browse/HDFS-13868 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.1.0, 3.0.3 >Reporter: Siyao Meng >Priority: Major > > Proof: > {code:java} > # Bash > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" > # Note that I intentionally typed the wrong parameter name for > "oldsnapshotname" above to cause NPE. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" > # Empty string for oldsnapshotname > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" > # Missing param oldsnapshotname, essentially the same as the first case. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
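The reported NPE is the classic symptom of an unvalidated request parameter. A guard of the following shape would reject a missing or empty "oldsnapshotname" with a clear error instead of letting a null propagate; the class and method names are illustrative, not the actual WebHDFS handler code.

```java
// Hypothetical sketch of the parameter validation that would avoid the NPE
// reported in HDFS-13868: fail fast with IllegalArgumentException (which
// WebHDFS would surface as a RemoteException with a real message) rather
// than dereferencing a null snapshot name later.
public class SnapshotDiffParamCheck {
  public static String validate(String snapshotName, String oldSnapshotName) {
    if (snapshotName == null || snapshotName.isEmpty()) {
      throw new IllegalArgumentException("Missing parameter: snapshotname");
    }
    if (oldSnapshotName == null || oldSnapshotName.isEmpty()) {
      throw new IllegalArgumentException("Missing parameter: oldsnapshotname");
    }
    return "diff(" + oldSnapshotName + " -> " + snapshotName + ")";
  }

  public static void main(String[] args) {
    System.out.println(validate("snap3", "snap2")); // the valid case
    try {
      validate("snap3", null); // the failing case from the report
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```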
[jira] [Comment Edited] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF
[ https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592218#comment-16592218 ] Brahma Reddy Battula edited comment on HDFS-13655 at 8/24/18 10:02 PM: --- Currently we can see,there are so many jira's got logged,those also we might need to control to branch if we want to create separate branch. And here we might need to impl unsupported API's(not new features) which can done one by one.SO I feel,try fix as many as possible.Can mention in release notes(future) about RBF status..? (OR) can even create branch and push all commits to branch,once it's stabilised we can merge back. May be we can arrange call and finalise.? was (Author: brahmareddy): Currently we can see,there are so many jira's got logged,those also we might need to control to branch if we want to create separate branch. And here we might need to impl unsupported API's(not new features) which can done one by one.SO I feel,try fix as many as possible.Can mention in release notes(future) about RBF status..? > RBF: Add missing ClientProtocol APIs to RBF > --- > > Key: HDFS-13655 > URL: https://issues.apache.org/jira/browse/HDFS-13655 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Priority: Major > > As > [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975] > with [~elgoiri], there are some HDFS methods that does not take path as a > parameter. We should support these to work with federation. > The ones missing are: > * Snapshots > * Storage policies > * Encryption zones > * Cache pools > One way to reasonably have them to work with federation is to 'list' each > nameservice and concat the results. This can be done pretty much the same as > {{refreshNodes()}} and it would be a matter of querying all the subclusters > and aggregate the output (e.g., {{getDatanodeReport()}}.) 
[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
[ https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-13868: -- Affects Version/s: 3.1.0 3.0.3 > WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but > "oldsnapshotname" is not. > - > > Key: HDFS-13868 > URL: https://issues.apache.org/jira/browse/HDFS-13868 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.1.0, 3.0.3 >Reporter: Siyao Meng >Priority: Major > > Proof: > {code:java} > # Bash > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" > # Note that I intentionally typed the wrong parameter name for > "oldsnapshotname" above to cause NPE. > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} > # OR > $ curl > "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" > {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13867) RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592235#comment-16592235 ] genericqa commented on HDFS-13867: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 27s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13867 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937063/HDFS-13867-02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e0ec30a614d1 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8563fd6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/24876/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24876/testReport/ | | Max. process+thread count | 1363 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24876/console | | Powered by | Apache Yetus
[jira] [Created] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
Siyao Meng created HDFS-13868: - Summary: WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not. Key: HDFS-13868 URL: https://issues.apache.org/jira/browse/HDFS-13868 Project: Hadoop HDFS Issue Type: Bug Components: webhdfs Reporter: Siyao Meng Proof: {code:java} # Bash $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3" # Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE. {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3" {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}} # OR $ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3" {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592163#comment-16592163 ] Anu Engineer edited comment on HDDS-359 at 8/24/18 9:56 PM: [~xyao] Thanks for the review. The next patch addresses some of the comments. {quote}DBConfigFromFile.java Line 122: can we move this to line 117 so that the caller does not need to check null. {quote} Here is what we are doing, if there is a name.db.ini in the config directory, we will use those params for the RocksDB. Otherwise, we will use the profile key if specified, if there is nothing specified we will use the default profile. The return of null makes it easy for us to know that we need to get a profile from the secondary sources. {quote}Line 79: NIT: annotate for test only? {quote} addTable is not test only, if you add a table without columnfamily, we will default to profiles. {quote}Line 118: NIT: rename from processTableNames => processTables? {quote} Done. {quote}Line 141: this assumes all tables have the same column family option. Is it possible to have per table column family option {quote} We already do that. When the user add a table, they can call addTable(string) or addTable(string, options). If the first addTable is used, we iterate thru the string table and add the default option. If the user had called addTable(string, option) then the user-specified option would be used. {quote}Line 144: should we document in the build class that addTable() should not add {quote} I can do that, but if the user tried to add an existing table we will throw an exception. we have a test case for that case. {quote}Line 148-170: the current code loads from rocksDBOption parameter, OR config file OR pre-defined db profiles? Is it possible to allow all of these and merge/overwrite with the config file like hadoop configs. {quote} The current code attempts to get config in a hierarchical fashion. 
First, it looks for the config from an .INI file, if not found, it will try to read the profile from ozone-default.xml and if that is not defined, we will use the default profile. We don't try to merge the configs but use each one. It is hard to merge these since it becomes harder to define what the final config will look like. {quote}Can we abstract the table config otherwise we can use ColumnFamilyOptions directly {quote} Sorry, I am not sure I understand this comment clearly. Could you please explain it to me once more? {quote}Test.db.ini : Add an exception to the license check plugin? {quote} done. {quote}Line 67: should we clean up the copied test ini file in the tearDown? {quote} we are copying a file into the Temporary folder. JUnit will clean it up automatically at the exit of the test. was (Author: anu): [~xyao] Thanks for the review. The next patch addresses some of the comments. {quote}DBConfigFromFile.java Line 122: can we move this to line 117 so that the caller does not need to check null. {quote} Here is what we are doing, if there is a name.db.ini in the config directory, we will use those params for the RocksDB. Otherwise we will use the profile key if specified, if there is nothing specified we will use the default profile. The return of null makes it easy for us to know that we need to get profile from the secondary sources. {quote}Line 79: NIT: annotate for test only? {quote} addTable is not test only, if you add a table without columnfamily, we will default to profiles. {quote}Line 118: NIT: rename from processTableNames => processTables? {quote} Done. {quote}Line 141: this assumes all tables have the same column family option. Is it possible to have per table column family option {quote} We already do that. When the user add a table, they can call addTable(string) or addTable(string, options). If the first addTable is used, we iterate thru the string table and add the default option.
If the user had called addTable(string, option) then the user specified option would be used. {quote}Line 144: should we document in the build class that addTable() should not add {quote} I can do that, but if the user tried to add an existing table we will throw an exception. we have a test case for that case. {quote}Line 148-170: the current code loads from rocksDBOption parameter, OR config file OR pre-defined db profiles? Is it possible to allow all of these and merge/overwrite with the config file like hadoop configs. {quote} The current code attempts to get config in a heireachical fashion. First it looks for the config from an .INI file, if not found, it will try to read the profile from ozone-deafult.xml and if that is not defined, we will use the default profile. We don't try to merge the configs, but use each one. It is hard to merge these since it because harder to define what the final config will look like. {quote}Can we abstract the table config otherwise we can use ColumnFamilyOptions
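The hierarchical lookup described in this comment (per-DB .ini options if present, else the profile named in configuration, else the default profile) is a first-hit-wins chain with no merging. A minimal sketch, with illustrative stand-in names rather than the actual HDDS-359 classes:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical sketch of the hierarchical config resolution described above.
// Each source is consulted in order; the first one that yields a value wins,
// and nothing is merged across sources.
public class DbOptionsResolver {
  public static String resolve(Supplier<Optional<String>> iniOptions,
                               Supplier<Optional<String>> configuredProfile,
                               String defaultProfile) {
    return iniOptions.get()                                   // 1. name.db.ini
        .orElseGet(() -> configuredProfile.get()              // 2. profile key
            .orElse(defaultProfile));                         // 3. default
  }

  public static void main(String[] args) {
    // No .ini file, but a profile key is configured: the profile wins.
    System.out.println(resolve(Optional::empty, () -> Optional.of("SSD"), "DISK")); // prints SSD
  }
}
```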
[jira] [Commented] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1659#comment-1659 ] Anu Engineer commented on HDDS-359: --- {quote}I think the confusion is that DBStoreBuilder#addTable() allows add per table column family option. But the caller processTableNames() only pass the same one from dbProfile.getColumnFamilyOptions() in line 141 based on SSD/DISK. {quote} I see the issue you are mentioning and I can see the source of confusion this is creating. When we call the function addTable(string, option), internally we are adding that to a set: {{private Set<TableConfig> tables;}} However, when the user makes a call to addTable(string) where the user does not specify an Option, we add it to a list of Strings: {{private List<String> tableNames;}} Later in the "processTables" call, we walk the tableNames and make calls into addTable(string, default Option). The addTable with name only makes some code simpler, especially if you are happy with the defaults. bq. I mean the TableConfig class is almost the same as ColumnFamilyOption. We could potentially use it directly. Yes, that would mean that we would have to pass in 2 lists, one for the name and another for the ColumnFamilyOption. This class creates a relationship between those 2 variables. > RocksDB Profiles support > > > Key: HDDS-359 > URL: https://issues.apache.org/jira/browse/HDDS-359 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-359.001.patch, HDDS-359.002.patch, > HDDS-359.003.patch, HDDS-359.004.patch > > > This allows us to tune the OM/SCM DB for different machine configurations.
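The two addTable overloads discussed in this thread can be sketched as follows: a name-only call defers the choice of options until processing, while a name-plus-options call pins an explicit per-table option. The class and field names below are simplified stand-ins, not the actual DBStoreBuilder code, and plain Strings stand in for ColumnFamilyOptions.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the builder behavior described above:
// addTable(name) records just the name; addTable(name, options) records an
// explicit per-table option; processTables fills in the profile default for
// every name-only table.
public class TableBuilderSketch {
  private final Map<String, String> tables = new LinkedHashMap<>();
  private final List<String> tableNames = new ArrayList<>();

  public TableBuilderSketch addTable(String name) {
    tableNames.add(name);  // option chosen later from the profile
    return this;
  }

  public TableBuilderSketch addTable(String name, String options) {
    if (tables.put(name, options) != null) {
      // mirrors the documented behavior: re-adding a table is an error
      throw new IllegalArgumentException("table already added: " + name);
    }
    return this;
  }

  public Map<String, String> processTables(String defaultOptions) {
    for (String name : tableNames) {
      tables.put(name, defaultOptions);  // fill in the profile default
    }
    return tables;
  }

  public static void main(String[] args) {
    Map<String, String> built = new TableBuilderSketch()
        .addTable("keyTable")                       // gets the default
        .addTable("openKeyTable", "custom-cf-opts") // explicit per-table option
        .processTables("ssd-profile-opts");
    System.out.println(built);
  }
}
```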
[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592219#comment-16592219 ] Chen Liang commented on HDFS-13848: --- v005 patch LGTM, +1 pending Jenkins. > Refactor NameNode failover proxy providers > -- > > Key: HDFS-13848 > URL: https://issues.apache.org/jira/browse/HDFS-13848 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, hdfs-client >Affects Versions: 2.7.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13848-003.patch, HDFS-13848-004.patch, > HDFS-13848-005.patch, HDFS-13848.002.patch, HDFS-13848.patch > > > Looking at NN failover proxy providers in the context of HDFS-13782 I noticed > that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have > a lot of common logic. We can move this common logic into > {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF
[ https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592218#comment-16592218 ] Brahma Reddy Battula commented on HDFS-13655: - Currently we can see that many JIRAs have been logged; we would also need to direct those to the branch if we want to create a separate branch. And here we need to implement the unsupported APIs (not new features), which can be done one by one. So I feel we should try to fix as many as possible. Can we mention the RBF status in the (future) release notes? > RBF: Add missing ClientProtocol APIs to RBF > --- > > Key: HDFS-13655 > URL: https://issues.apache.org/jira/browse/HDFS-13655 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Priority: Major > > As > [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975] > with [~elgoiri], there are some HDFS methods that do not take path as a > parameter. We should support these to work with federation. > The ones missing are: > * Snapshots > * Storage policies > * Encryption zones > * Cache pools > One way to reasonably have them to work with federation is to 'list' each > nameservice and concat the results. This can be done pretty much the same as > {{refreshNodes()}} and it would be a matter of querying all the subclusters > and aggregating the output (e.g., {{getDatanodeReport()}}.)
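The aggregation strategy the issue describes — query every nameservice and concatenate the results, as getDatanodeReport() effectively does — can be sketched in a few lines. The names below are illustrative, not the Router's actual RPC client API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the fan-out-and-concat strategy described above for
// ClientProtocol methods that have no path to route on.
public class SubclusterAggregator {
  public static <T> List<T> queryAllAndConcat(
      List<String> nameservices, Function<String, List<T>> perNsCall) {
    List<T> aggregated = new ArrayList<>();
    for (String ns : nameservices) {
      // Invoke the same call against each subcluster and concat the output.
      aggregated.addAll(perNsCall.apply(ns));
    }
    return aggregated;
  }

  public static void main(String[] args) {
    // Stub standing in for a per-nameservice listCachePools() call.
    List<String> pools = queryAllAndConcat(
        List.of("ns0", "ns1"),
        ns -> List.of(ns + "-cachePool"));
    System.out.println(pools); // [ns0-cachePool, ns1-cachePool]
  }
}
```

A production version would issue the per-nameservice calls concurrently rather than in a loop, but the aggregation shape is the same.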
[jira] [Commented] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592214#comment-16592214 ] Xiaoyu Yao commented on HDDS-359: - {quote}{quote}Line 141: this assumes all tables have the same column family option. Is it possible to have per table column family option {quote} We already do that. When the user add a table, they can call addTable(string) or addTable(string, options). If the first addTable is used, we iterate thru the string table and add the default option. If the user had called addTable(string, option) then the user specified option would be used. {quote} I think the confusion is that DBStoreBuilder#addTable() allows add per table column family option. But the caller processTableNames() only pass the same one from dbProfile.getColumnFamilyOptions() in line 141 based on SSD/DISK. For example, if we have two tables: one for normal key and the other for open key both take the SSD profile but open key table may want some different column family settings. {quote}{quote}Can we abstract the table config otherwise we can use ColumnFamilyOptions directly {quote} Sorry, I am not sure I understand this comment clearly. Could you please explain it to me once more ? {quote} I mean the TableConfig class is almost the same as ColumnFamilyOption. We could potentially use it directly. {quote}It is hard to merge these since it because harder to define what the final config will look like. {quote} Good point. Let's revisit this later if things change. > RocksDB Profiles support > > > Key: HDDS-359 > URL: https://issues.apache.org/jira/browse/HDDS-359 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-359.001.patch, HDDS-359.002.patch, > HDDS-359.003.patch, HDDS-359.004.patch > > > This allows us to tune the OM/SCM DB for different machine configurations. 
[jira] [Commented] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592208#comment-16592208 ] Giovanni Matteo Fumarola commented on HDFS-13849: - Thanks [~iapicker] for the patch. Please fix the checkstyle and the patch is good to go. The tests are not related to the patch. > Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, > hadoop-hdfs-rbf, hadoop-hdfs-native-client > --- > > Key: HDFS-13849 > URL: https://issues.apache.org/jira/browse/HDFS-13849 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ian Pickering >Assignee: Ian Pickering >Priority: Minor > Attachments: HDFS-13849.v1.patch, HDFS-13849.v1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-364) Update open container replica information in SCM during DN register
[ https://issues.apache.org/jira/browse/HDDS-364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592200#comment-16592200 ] Hudson commented on HDDS-364: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14826 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14826/]) HDDS-364. Update open container replica information in SCM during DN (elek: rev a5eba25506a4ca7ac9efa9b60b204c8cf1aa4160) * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerMapping.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/closer/TestContainerCloser.java > Update open container replica information in SCM during DN register > --- > > Key: HDDS-364 > URL: https://issues.apache.org/jira/browse/HDDS-364 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-364.00.patch, HDDS-364.01.patch, HDDS-364.02.patch, > HDDS-364.03.patch > > > Update open container replica information in SCM during DN register. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-7524) TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk
[ https://issues.apache.org/jira/browse/HDFS-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592195#comment-16592195 ] Brahma Reddy Battula edited comment on HDFS-7524 at 8/24/18 9:34 PM: - [~RANith] thanks for reminding and sorry for delay.. [~yzhangal] if you've chance,can you pitch in here before I commit this patch. was (Author: brahmareddy): [~RANith] thanks for reminding and sorry for delay.. [~yzhangal] it will be great if you pitch in before commit this patch. > TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk > --- > > Key: HDFS-7524 > URL: https://issues.apache.org/jira/browse/HDFS-7524 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, test >Reporter: Yongjun Zhang >Assignee: Ranith Sardar >Priority: Major > Labels: flaky-test > Attachments: HDFS-7524-001.patch > > > https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/ > Error Message > {quote} > After waiting the operation updatePipeline still has not taken effect on NN > yet > Stacktrace > java.lang.AssertionError: After waiting the operation updatePipeline still > has not taken effect on NN yet > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176) > {quote} > Found by tool proposed in HADOOP-11045: > {quote} > [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j > Hadoop-Hdfs-trunk -n 5 | tee bt.log > Recently FAILED builds in url: > https://builds.apache.org//job/Hadoop-Hdfs-trunk > THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, > as listed below: > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport > (2014-12-15 03:30:01) > Failed test: > 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > Failed test: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport > (2014-12-13 10:32:27) > Failed test: > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport > (2014-12-13 03:30:01) > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport > (2014-12-11 03:30:01) > Failed test: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization > Among 6 runs examined, all failed tests <#failedRuns: testName>: > 3: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > 2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > 2: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > 1: > org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13848: --- Attachment: HDFS-13848-005.patch > Refactor NameNode failover proxy providers > -- > > Key: HDFS-13848 > URL: https://issues.apache.org/jira/browse/HDFS-13848 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, hdfs-client >Affects Versions: 2.7.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13848-003.patch, HDFS-13848-004.patch, > HDFS-13848-005.patch, HDFS-13848.002.patch, HDFS-13848.patch > > > Looking at NN failover proxy providers in the context of HDFS-13782 I noticed > that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have > a lot of common logic. We can move this common logic into > {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13848) Refactor NameNode failover proxy providers
[ https://issues.apache.org/jira/browse/HDFS-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592197#comment-16592197 ] Konstantin Shvachko commented on HDFS-13848: * {{addressKey}} is a hacky solution introduced in HDFS-13536 for provided storage. I first thought it was used in some tests. Different address key should be handled in {{InMemoryAliasMapFailoverProxyProvider}} instead of spilling it into {{ConfiguredFailoverProxyProvider}}, but oh well. Added {{addressKey}} to {{getAddresses()}}. * I think the return value is useful. You can do {{return createProxyIfNeeded(ip)}} instead of two lines. * I first made {{getProxyAddresses()}} static, just as you suggest, but then I realized that I forgot to add delegation token cloning in places where it is needed, and decided to go with current version. > Refactor NameNode failover proxy providers > -- > > Key: HDFS-13848 > URL: https://issues.apache.org/jira/browse/HDFS-13848 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, hdfs-client >Affects Versions: 2.7.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-13848-003.patch, HDFS-13848-004.patch, > HDFS-13848.002.patch, HDFS-13848.patch > > > Looking at NN failover proxy providers in the context of HDFS-13782 I noticed > that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} have > a lot of common logic. We can move this common logic into > {{AbstractNNFailoverProxyProvider}}, which simplifies things a lot. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
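The refactoring shape under discussion — common proxy bookkeeping in an abstract base, with concrete providers supplying only address resolution — can be sketched as below. These classes are simplified stand-ins for the actual AbstractNNFailoverProxyProvider hierarchy, and Strings stand in for real RPC proxies.

```java
import java.net.InetSocketAddress;
import java.util.List;

// Hypothetical sketch of hoisting shared logic into an abstract failover
// provider base class, as HDFS-13848 does for ConfiguredFailoverProxyProvider
// and IPFailoverProxyProvider.
public abstract class FailoverProviderSketch {
  // Each concrete provider only decides how NameNode addresses are resolved.
  protected abstract List<InetSocketAddress> getAddresses();

  // Common logic shared by all providers: create a proxy for the first
  // resolved address (a real provider would cache it and handle tokens).
  public String createProxyIfNeeded() {
    InetSocketAddress addr = getAddresses().get(0);
    return "proxy:" + addr.getHostString() + ":" + addr.getPort();
  }

  public static class Configured extends FailoverProviderSketch {
    @Override
    protected List<InetSocketAddress> getAddresses() {
      // a real provider would read dfs.namenode.rpc-address.<ns>.<nn> keys
      return List.of(InetSocketAddress.createUnresolved("nn1", 8020),
                     InetSocketAddress.createUnresolved("nn2", 8020));
    }
  }

  public static void main(String[] args) {
    System.out.println(new Configured().createProxyIfNeeded()); // proxy:nn1:8020
  }
}
```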
[jira] [Comment Edited] (HDFS-13867) RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592186#comment-16592186 ] Ayush Saxena edited comment on HDFS-13867 at 8/24/18 9:32 PM: -- Thanx [~brahmareddy] for the suggestion. But when the exception is thrown, in the catch block it will print the error message as well as the usage. Before HDFS-13861 it will be printing the entire usage but after the patch gets pushed it would be printing just for the command that failed. was (Author: ayushtkn): Thanx [~brahmareddy] for the suggestion. Yes I even think that could be a better approach. If you suggest I can move on with this approach after HDFS-13861 is pushed. > RBF: Add Validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check if the total number of arguments provided for the > Router Admin commands are not more than max possible.In most cases if there > are some non related extra parameters after the required arguments it doesn't > validate against this but instead perform the action with the required > parameters and ignore the extra ones which shouldn't be in the ideal case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7524) TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk
[ https://issues.apache.org/jira/browse/HDFS-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592195#comment-16592195 ] Brahma Reddy Battula commented on HDFS-7524: [~RANith] thanks for reminding and sorry for delay.. [~yzhangal] it will be great if you pitch in before commit this patch. > TestRetryCacheWithHA.testUpdatePipeline fails occasionally in trunk > --- > > Key: HDFS-7524 > URL: https://issues.apache.org/jira/browse/HDFS-7524 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, test >Reporter: Yongjun Zhang >Assignee: Ranith Sardar >Priority: Major > Labels: flaky-test > Attachments: HDFS-7524-001.patch > > > https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/ > Error Message > {quote} > After waiting the operation updatePipeline still has not taken effect on NN > yet > Stacktrace > java.lang.AssertionError: After waiting the operation updatePipeline still > has not taken effect on NN yet > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176) > {quote} > Found by tool proposed in HADOOP-11045: > {quote} > [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j > Hadoop-Hdfs-trunk -n 5 | tee bt.log > Recently FAILED builds in url: > https://builds.apache.org//job/Hadoop-Hdfs-trunk > THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, > as listed below: > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport > (2014-12-15 03:30:01) > Failed test: > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > Failed test: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > Failed test: > 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport > (2014-12-13 10:32:27) > Failed test: > org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport > (2014-12-13 03:30:01) > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport > (2014-12-11 03:30:01) > Failed test: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > Failed test: > org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization > Among 6 runs examined, all failed tests <#failedRuns: testName>: > 3: > org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline > 2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName > 2: > org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect > 1: > org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13867) RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592186#comment-16592186 ] Ayush Saxena commented on HDFS-13867: - Thanx [~brahmareddy] for the suggestion. Yes I even think that could be a better approach. If you suggest I can move on with this approach after HDFS-13861 is pushed. > RBF: Add Validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check if the total number of arguments provided for the > Router Admin commands are not more than max possible.In most cases if there > are some non related extra parameters after the required arguments it doesn't > validate against this but instead perform the action with the required > parameters and ignore the extra ones which shouldn't be in the ideal case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13861) RBF: Illegal Router Admin command leads to printing usage for all commands
[ https://issues.apache.org/jira/browse/HDFS-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592177#comment-16592177 ] Ayush Saxena edited comment on HDFS-13861 at 8/24/18 9:20 PM: -- Thanx [~brahmareddy] for the comment. i.) It should be backward compatible, as case was only ignored when checking whether the command has the minimum number of arguments. The execution part is still case sensitive: if the case is not as desired, the command won't execute and ends up being treated as an unknown command, printing the entire usage. So it shouldn't be an issue? ii.) Could be done for sure, if you suggest. It was previously like that, so I was trying to keep the changes minimal relative to the previous usage. But I can do that if you think so. was (Author: ayushtkn): Thanx [~brahmareddy] for the comment. i.) It should be backward compatible, as case was only ignored when checking whether the command has the minimum number of arguments. The execution part is case sensitive only. So it shouldn't be an issue? ii.) Could be done for sure, if you suggest. It was previously like that, so I was trying to keep the changes minimal relative to the previous usage. But I can do that if you think so. > RBF: Illegal Router Admin command leads to printing usage for all commands > -- > > Key: HDFS-13861 > URL: https://issues.apache.org/jira/browse/HDFS-13861 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13861-01.patch, HDFS-13861-02.patch > > > When an illegal argument is passed for any router admin command, it prints the usage for all the admin commands; it should be specific to the command used and print the usage only for that command. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592181#comment-16592181 ] genericqa commented on HDFS-13849: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 11s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 49 unchanged - 0 fixed = 54 total (was 49) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 20s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 55s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static | | | test_libhdfs_threaded_hdfspp_test_shim_static | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
[jira] [Commented] (HDFS-13861) RBF: Illegal Router Admin command leads to printing usage for all commands
[ https://issues.apache.org/jira/browse/HDFS-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592177#comment-16592177 ] Ayush Saxena commented on HDFS-13861: - Thanx [~brahmareddy] for the comment. i.) It should be backward compatible, as case was only ignored when checking whether the command has the minimum number of arguments. The execution part is case sensitive only. So it shouldn't be an issue? ii.) Could be done for sure, if you suggest. It was previously like that, so I was trying to keep the changes minimal relative to the previous usage. But I can do that if you think so. > RBF: Illegal Router Admin command leads to printing usage for all commands > -- > > Key: HDFS-13861 > URL: https://issues.apache.org/jira/browse/HDFS-13861 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13861-01.patch, HDFS-13861-02.patch > > > When an illegal argument is passed for any router admin command, it prints the usage for all the admin commands; it should be specific to the command used and print the usage only for that command. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13867) RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592170#comment-16592170 ] Brahma Reddy Battula commented on HDFS-13867: - Instead of throwing the IAE, how about printing the command usage itself..? i.e. printUsage(cmd); If you all agree, this can be done along with HDFS-13861..? > RBF: Add Validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check if the total number of arguments provided for the > Router Admin commands are not more than max possible.In most cases if there > are some non related extra parameters after the required arguments it doesn't > validate against this but instead perform the action with the required > parameters and ignore the extra ones which shouldn't be in the ideal case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
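The direction discussed above, rejecting surplus arguments and falling back to the command-specific usage instead of throwing an IllegalArgumentException, might be sketched roughly as below. This is a hypothetical helper under assumed names (`AdminArgCheck`, `validate`), not the HDFS-13867 patch; it only illustrates the shape of a per-command maximum-argument check.

```java
// Hypothetical sketch of per-command argument-count validation for a CLI
// admin tool. Names and the usage string are illustrative assumptions.
class AdminArgCheck {

  // Returns true when the argument count is within [min, max]; otherwise
  // appends the command-specific usage (per the printUsage(cmd) suggestion)
  // and returns false so the caller can abort without throwing.
  static boolean validate(String cmd, String[] args, int min, int max,
                          StringBuilder usageOut) {
    if (args.length < min || args.length > max) {
      usageOut.append("Usage: hdfs dfsrouteradmin ").append(cmd);
      return false;
    }
    return true;
  }
}
```

A caller would check the boolean and print `usageOut` on failure, so extra trailing parameters are reported rather than silently ignored.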
[jira] [Commented] (HDFS-13861) RBF: Illegal Router Admin command leads to printing usage for all commands
[ https://issues.apache.org/jira/browse/HDFS-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592164#comment-16592164 ] Brahma Reddy Battula commented on HDFS-13861: - [~ayushtkn] thanks for working on this. Have a couple of queries: i) Making it case sensitive, won't that break backward compatibility..? ii) How about changing "Federation Admin Tools:\n" ==> "Usage: hdfs routeradmin"..? > RBF: Illegal Router Admin command leads to printing usage for all commands > -- > > Key: HDFS-13861 > URL: https://issues.apache.org/jira/browse/HDFS-13861 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13861-01.patch, HDFS-13861-02.patch > > > When an illegal argument is passed for any router admin command, it prints the usage for all the admin commands; it should be specific to the command used and print the usage only for that command. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592163#comment-16592163 ] Anu Engineer commented on HDDS-359: --- [~xyao] Thanks for the review. The next patch addresses some of the comments. {quote}DBConfigFromFile.java Line 122: can we move this to line 117 so that the caller does not need to check null. {quote} Here is what we are doing: if there is a name.db.ini in the config directory, we will use those params for RocksDB. Otherwise we will use the profile key if specified; if nothing is specified, we will use the default profile. The return of null makes it easy for us to know that we need to get the profile from the secondary sources. {quote}Line 79: NIT: annotate for test only? {quote} addTable is not test-only; if you add a table without a column family, we will default to profiles. {quote}Line 118: NIT: rename from processTableNames => processTables? {quote} Done. {quote}Line 141: this assumes all tables have the same column family option. Is it possible to have per table column family option {quote} We already do that. When the user adds a table, they can call addTable(string) or addTable(string, options). If the first addTable is used, we iterate through the string table and add the default option. If the user had called addTable(string, option) then the user-specified option would be used. {quote}Line 144: should we document in the build class that addTable() should not add {quote} I can do that, but if the user tries to add an existing table we will throw an exception. We have a test case for that. {quote}Line 148-170: the current code loads from rocksDBOption parameter, OR config file OR pre-defined db profiles? Is it possible to allow all of these and merge/overwrite with the config file like hadoop configs. {quote} The current code attempts to get config in a hierarchical fashion. 
First it looks for the config in an .INI file; if not found, it will try to read the profile from ozone-default.xml, and if that is not defined, we will use the default profile. We don't try to merge the configs, but use each one. It is hard to merge these since it becomes harder to define what the final config will look like. {quote}Can we abstract the table config otherwise we can use ColumnFamilyOptions directly {quote} Sorry, I am not sure I understand this comment clearly. Could you please explain it to me once more? {quote}Test.db.ini : Add an exception to the license check plugin? {quote} Done. {quote}Line 67: should we clean up the copied test ini file in the tearDown? {quote} We are copying a file into the temporary folder; JUnit will clean it up automatically at the end of the test. > RocksDB Profiles support > > > Key: HDDS-359 > URL: https://issues.apache.org/jira/browse/HDDS-359 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-359.001.patch, HDDS-359.002.patch, > HDDS-359.003.patch, HDDS-359.004.patch > > > This allows us to tune the OM/SCM DB for different machine configurations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
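The hierarchical lookup described in the comment above (a per-DB .ini file first, then a profile named in ozone-default.xml, then the default profile) can be sketched as below. The class and method names are hypothetical and a String stands in for the RocksDB options object; this is not the HDDS-359 code, only an illustration of fall-through config resolution without merging.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

// Illustrative sketch of hierarchical option resolution: use the .ini file
// if it exists, else the profile named in configuration, else the default
// profile. Each source is used whole; sources are never merged.
class DBProfileResolver {
  static String resolve(Path iniFile, Optional<String> configuredProfile) {
    if (iniFile != null && Files.exists(iniFile)) {
      return "ini:" + iniFile.getFileName(); // options come from the file
    }
    return configuredProfile.orElse("DEFAULT"); // secondary sources
  }
}
```

The fall-through mirrors why a null return from the file loader is useful: it signals that the next source in the hierarchy should be consulted.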
[jira] [Updated] (HDDS-359) RocksDB Profiles support
[ https://issues.apache.org/jira/browse/HDDS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-359: -- Attachment: HDDS-359.004.patch > RocksDB Profiles support > > > Key: HDDS-359 > URL: https://issues.apache.org/jira/browse/HDDS-359 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-359.001.patch, HDDS-359.002.patch, > HDDS-359.003.patch, HDDS-359.004.patch > > > This allows us to tune the OM/SCM DB for different machine configurations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient
[ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592160#comment-16592160 ] Tsz Wo Nicholas Sze commented on HDDS-247: -- > ..., i agree we can make both ChunkGroupOutputStream/ChunkOutputStream > non-thread-safe. However, I would prefer to do this in a follow up Jira to > ensure all our existing tests/tools work correctly with this change, Sure. I am fine to commit this patch. Thanks. > Handle CLOSED_CONTAINER_IO exception in ozoneClient > --- > > Key: HDDS-247 > URL: https://issues.apache.org/jira/browse/HDDS-247 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Blocker > Fix For: 0.2.1 > > Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, > HDDS-247.03.patch, HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch, > HDDS-247.07.patch, HDDS-247.08.patch, HDDS-247.09.patch, HDDS-247.10.patch > > > In case of ongoing writes by Ozone client to a container, the container might > get closed on the Datanodes because of node loss, out of space issues etc. In > such cases, the operation will fail with CLOSED_CONTAINER_IO exception. In > cases as such, ozone client should try to get the committed length of the > block from the Datanodes, and update the OM. This Jira aims to address this > issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-364) Update open container replica information in SCM during DN register
[ https://issues.apache.org/jira/browse/HDDS-364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-364: -- Resolution: Fixed Status: Resolved (was: Patch Available) Just landed on trunk. Thank you [~ajayydv] for the contribution. > Update open container replica information in SCM during DN register > --- > > Key: HDDS-364 > URL: https://issues.apache.org/jira/browse/HDDS-364 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-364.00.patch, HDDS-364.01.patch, HDDS-364.02.patch, > HDDS-364.03.patch > > > Update open container replica information in SCM during DN register. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13815) RBF: Add check to order command
[ https://issues.apache.org/jira/browse/HDFS-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592155#comment-16592155 ] Brahma Reddy Battula commented on HDFS-13815: - Sorry for landing late. Currently both RouterAdmin#*addMount*(String[], int) and RouterAdmin#*updateMount*(java.lang.String[], int) have the same validation for params except order (HASH is hard-coded for addMount, which updateMount doesn't have). So, how about extracting this into one (inner) class or method..? That way we have all the params in one place and maintenance will be easy. I am thinking like below. Please correct me if I am wrong.
{code:java}
public boolean addMount(String[] parameters, int i) throws IOException {
  ValidateParams params = new ValidateParams(parameters, i, "addMount").invoke();
  return addMount(params.getMount(), params.getNss(), params.getDest(),
      params.isReadOnly(), DestinationOrder.HASH,
      new ACLEntity(params.getOwner(), params.getGroup(), params.getMode()));
}

public boolean updateMount(String[] parameters, int i) throws IOException {
  ValidateParams params = new ValidateParams(parameters, i, "updateMount").invoke();
  return updateMount(params.getMount(), params.getNss(), params.getDest(),
      params.isReadOnly(), params.getOrder(),
      new ACLEntity(params.getOwner(), params.getGroup(), params.getMode()));
}
{code}
{code:java}
private class ValidateParams {
  private String[] parameters;
  private int i;
  // Keep the command name so invoke() can print the command-specific usage.
  private String cmd;
  private String mount;
  private String[] nss;
  private String dest;
  private boolean readOnly;
  private String owner;
  private String group;
  private FsPermission mode;
  private DestinationOrder order;

  public ValidateParams(String[] parameters, int i, String cmd) {
    this.parameters = parameters;
    this.i = i;
    this.cmd = cmd;
  }

  public String getMount() { return mount; }
  public String[] getNss() { return nss; }
  public String getDest() { return dest; }
  public boolean isReadOnly() { return readOnly; }
  public String getOwner() { return owner; }
  public String getGroup() { return group; }
  public FsPermission getMode() { return mode; }
  public DestinationOrder getOrder() { return order; }

  public ValidateParams invoke() {
    // Mandatory parameters
    mount = parameters[i++];
    nss = parameters[i++].split(",");
    dest = parameters[i++];
    // Optional parameters
    readOnly = false;
    owner = null;
    group = null;
    mode = null;
    order = DestinationOrder.HASH;
    while (i < parameters.length) {
      if (parameters[i].equals("-readonly")) {
        readOnly = true;
      } else if (parameters[i].equals("-order")) {
        i++;
        try {
          order = DestinationOrder.valueOf(parameters[i]);
        } catch (Exception e) {
          System.err.println("Cannot parse order: " + parameters[i]);
        }
      } else if (parameters[i].equals("-owner")) {
        i++;
        owner = parameters[i];
      } else if (parameters[i].equals("-group")) {
        i++;
        group = parameters[i];
      } else if (parameters[i].equals("-mode")) {
        i++;
        short modeValue = Short.parseShort(parameters[i], 8);
        mode = new FsPermission(modeValue);
      } else {
        // Unknown flag: print the usage for this command only.
        printUsage(cmd);
      }
      i++;
    }
    return this;
  }
}
{code}
> RBF: Add check to order command > --- > > Key: HDFS-13815 > URL: https://issues.apache.org/jira/browse/HDFS-13815 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation >Affects Versions: 3.0.0 >Reporter: Soumyapn >Assignee: Ranith Sardar >Priority: Major > Attachments: HDFS-13815-001.patch, HDFS-13815-002.patch, > HDFS-13815-003.patch, HDFS-13815-004.patch > > > No check is being done on the order command. > It reports the mount table as successfully updated even when the order > command is not specified, and the mount table is not actually updated. > Execute the dfsrouter update command with the below scenarios. > 1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM > 2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM > 3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM > 4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM > > The console message says, Successfully updated mount point. But it is not > updated in the mount table. 
> > Expected Result: > Exception on console as the order command is missing/not written properly -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-374) Support to configure container size in units lesser than GB
[ https://issues.apache.org/jira/browse/HDDS-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592140#comment-16592140 ] Hanisha Koneru commented on HDDS-374: - Thanks for working on this [~nandakumar131]. LGTM, +1 pending Jenkins. > Support to configure container size in units lesser than GB > --- > > Key: HDDS-374 > URL: https://issues.apache.org/jira/browse/HDDS-374 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Datanode >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-374.000.patch > > > After HDDS-317 we can configure the container size with its unit (eg. 5gb, > 1000mb etc.). But we still require it to be in multiples of GB, the > configured value will be rounded off (floor) to the nearest GB value. It will > be helpful to have support for units lesser than GB, it will make our life > simpler while writing unit tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
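Parsing a container size with a unit suffix (e.g. 5gb, 1000mb) without rounding the result down to a whole number of GB, as described above, could look like the sketch below. The `SizeParser` helper and its behavior are assumptions for illustration; this is not the HDDS-374/HDDS-317 implementation.

```java
import java.util.Locale;

// Hypothetical sketch: parse "5gb", "1000mb", "512kb" into bytes without
// forcing the value to a multiple of GB, so sub-GB container sizes survive.
class SizeParser {
  static long toBytes(String value) {
    String v = value.trim().toLowerCase(Locale.ROOT);
    long multiplier = 1L;
    if (v.endsWith("gb")) {
      multiplier = 1L << 30; v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("mb")) {
      multiplier = 1L << 20; v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("kb")) {
      multiplier = 1L << 10; v = v.substring(0, v.length() - 2);
    }
    return Long.parseLong(v.trim()) * multiplier;
  }
}
```

With this shape, a test can configure a 16mb container instead of being forced up or down to the nearest GB.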
[jira] [Updated] (HDFS-13867) RBF: Add Validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands
[ https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13867: Attachment: HDFS-13867-02.patch > RBF: Add Validation for max arguments for Router admin ls, clrQuota, > setQuota, rm and nameservice commands > -- > > Key: HDFS-13867 > URL: https://issues.apache.org/jira/browse/HDFS-13867 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch > > > Add validation to check if the total number of arguments provided for the > Router Admin commands are not more than max possible.In most cases if there > are some non related extra parameters after the required arguments it doesn't > validate against this but instead perform the action with the required > parameters and ignore the extra ones which shouldn't be in the ideal case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org