[jira] [Commented] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524601#comment-16524601 ] genericqa commented on HDDS-194:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 56s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 30s | trunk passed |
| +1 | compile | 28m 31s | trunk passed |
| +1 | checkstyle | 0m 27s | trunk passed |
| +1 | mvnsite | 2m 32s | trunk passed |
| +1 | shadedclient | 13m 37s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 2m 32s | trunk passed |
| +1 | javadoc | 2m 31s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 45s | the patch passed |
| +1 | compile | 27m 53s | the patch passed |
| +1 | javac | 27m 53s | the patch passed |
| +1 | checkstyle | 0m 22s | the patch passed |
| +1 | mvnsite | 2m 28s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 38s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 2m 53s | the patch passed |
| +1 | javadoc | 2m 34s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 17s | common in the patch passed. |
| +1 | unit | 2m 28s | server-scm in the patch passed. |
| +1 | unit | 0m 35s | tools in the patch passed. |
| -1 | unit | 22m 18s | integration-test in the patch failed. |
| +1 | asflicense | 0m 44s | The patch does not generate ASF License warnings. |
| | | 154m 31s | |

|| Reason || Tests ||
| Failed junit tests |
[jira] [Updated] (HDDS-183) Integrate Volumeset, ContainerSet.
[ https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-183:

Description:
This Jira adds the following:
1. Use the new VolumeSet.
2. Build the container map from .container files during startup.
3. Integrate HddsDispatcher.

was:
This Jira adds the following:
1. Use the new VolumeSet.
2. Build the container map from .container files during startup.

> Integrate Volumeset, ContainerSet.
> ----------------------------------
>
>          Key: HDDS-183
>          URL: https://issues.apache.org/jira/browse/HDDS-183
>      Project: Hadoop Distributed Data Store
>   Issue Type: Sub-task
>     Reporter: Bharat Viswanadham
>     Assignee: Bharat Viswanadham
>     Priority: Major
>      Fix For: 0.2.1
>
>  Attachments: HDDS-183-HDDS-48.00.patch, HDDS-183-HDDS-48.01.patch, HDDS-183-HDDS-48.02.patch, HDDS-183-HDDS-48.03.patch
>
>
> This Jira adds the following:
> 1. Use the new VolumeSet.
> 2. Build the container map from .container files during startup.
> 3. Integrate HddsDispatcher.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
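Item 2 of the description above (rebuild the container map from .container files during startup) can be sketched roughly as follows. This is a hypothetical illustration only; the class and method names (ContainerSetSketch, loadContainers) are invented for this sketch and are not the actual HDDS-48 API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: build an in-memory container map at startup. */
public class ContainerSetSketch {
  private static final String SUFFIX = ".container";

  // Container ID -> descriptor file name found on a volume.
  private final Map<Long, String> containerMap = new ConcurrentHashMap<>();

  /**
   * Register every .container descriptor found on a volume.
   * The container ID is assumed (for this sketch) to be encoded in the
   * file name, e.g. "42.container" -> 42.
   */
  public int loadContainers(List<String> volumeFileNames) {
    for (String name : volumeFileNames) {
      if (name.endsWith(SUFFIX)) {
        long id = Long.parseLong(name.substring(0, name.length() - SUFFIX.length()));
        containerMap.put(id, name);
      }
    }
    return containerMap.size();
  }

  public boolean contains(long id) {
    return containerMap.containsKey(id);
  }
}
```

In a real datanode this scan would walk each volume root of the VolumeSet on disk; the list-based input here just keeps the sketch self-contained.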
[jira] [Updated] (HDDS-183) Integrate Volumeset, ContainerSet.
[ https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-183:

Attachment: HDDS-183-HDDS-48.03.patch
[jira] [Commented] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524597#comment-16524597 ] genericqa commented on HDDS-193:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 2m 2s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 37s | trunk passed |
| +1 | compile | 28m 52s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 1m 10s | trunk passed |
| +1 | shadedclient | 13m 43s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 0m 47s | trunk passed |
| +1 | javadoc | 0m 54s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 51s | the patch passed |
| +1 | compile | 29m 7s | the patch passed |
| +1 | javac | 29m 7s | the patch passed |
| +1 | checkstyle | 0m 25s | the patch passed |
| +1 | mvnsite | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 10s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 0m 48s | the patch passed |
| +1 | javadoc | 0m 54s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 24s | server-scm in the patch passed. |
| -1 | unit | 14m 26s | integration-test in the patch failed. |
| +1 | asflicense | 0m 43s | The patch does not generate ASF License warnings. |
| | | 139m 23s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-193 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929314/HDDS-193.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 58b4c44cae7b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524570#comment-16524570 ] Xiaoyu Yao commented on HDDS-173:

Thanks [~hanishakoneru] for the update. Patch v3 looks good to me. Can you fix the Jenkins issues from the previous run? +1 after that.

> Refactor Dispatcher and implement Handler for new ContainerIO design
> --------------------------------------------------------------------
>
>          Key: HDDS-173
>          URL: https://issues.apache.org/jira/browse/HDDS-173
>      Project: Hadoop Distributed Data Store
>   Issue Type: Sub-task
>     Reporter: Hanisha Koneru
>     Assignee: Hanisha Koneru
>     Priority: Major
>      Fix For: 0.2.1
>
>  Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch, HDDS-173-HDDS-48.003.patch
>
>
> The Dispatcher will pass the ContainerCommandRequests to the corresponding Handler based on the ContainerType. Each ContainerType will have its own Handler. The Handler class will process the message.
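The quoted description boils down to dispatch-by-type: one Handler registered per ContainerType, with the Dispatcher routing each request to it. A minimal, self-contained sketch of that routing idea follows; all names and signatures here are illustrative stand-ins, not the actual classes from the HDDS-173 patch.

```java
import java.util.EnumMap;
import java.util.Map;

/** Illustrative sketch: route each request to the handler for its container type. */
public class DispatcherSketch {
  public enum ContainerType { KEY_VALUE }

  /** One handler per container type; the real Handler interface is richer. */
  public interface Handler {
    String handle(String request);
  }

  private final Map<ContainerType, Handler> handlers = new EnumMap<>(ContainerType.class);

  public void register(ContainerType type, Handler handler) {
    handlers.put(type, handler);
  }

  /** Look up the handler for the request's type and delegate to it. */
  public String dispatch(ContainerType type, String request) {
    Handler h = handlers.get(type);
    if (h == null) {
      return "UNSUPPORTED_CONTAINER_TYPE"; // real code would return an error response proto
    }
    return h.handle(request);
  }
}
```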
[jira] [Commented] (HDDS-186) Create under replicated queue
[ https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524564#comment-16524564 ] Xiaoyu Yao commented on HDDS-186:

Thanks [~ajayydv] for working on this. The patch v2 looks good to me. I just have a few minor comments:

ReplicationQueue.java
* Line 41-43: this can be consolidated into a single this.queue.add(repObj).

ReplicationReqMsg.java
* Line 28: NIT: rename from ReplicationReqMsg -> ReplicationRequest
* Line 59: this should be moved before line 56
* Line 63: Shorts.compare instead of Long.compare

> Create under replicated queue
> -----------------------------
>
>             Key: HDDS-186
>             URL: https://issues.apache.org/jira/browse/HDDS-186
>         Project: Hadoop Distributed Data Store
>      Issue Type: Bug
>      Components: SCM
> Affects Versions: 0.2.1
>        Reporter: Ajay Kumar
>        Assignee: Ajay Kumar
>        Priority: Major
>         Fix For: 0.2.1
>
>     Attachments: HDDS-186.00.patch, HDDS-186.01.patch, HDDS-186.02.patch
>
>
> Create under replicated queue to replicate under replicated containers in Ozone.
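The review comments hint at the queue's shape: a priority queue of ReplicationRequest objects ordered by a short replica-count field (hence the suggestion to use Shorts.compare). A hypothetical sketch under those assumptions; field names and the ordering rule are guesses from the review, not the actual patch.

```java
import java.util.PriorityQueue;
import java.util.Queue;

/** Illustrative sketch of an under-replicated container queue. */
public class ReplicationQueueSketch {

  public static final class ReplicationRequest
      implements Comparable<ReplicationRequest> {
    public final long containerId;
    public final short replicationCount; // assumed: fewer replicas == more urgent

    public ReplicationRequest(long containerId, short replicationCount) {
      this.containerId = containerId;
      this.replicationCount = replicationCount;
    }

    @Override
    public int compareTo(ReplicationRequest o) {
      // Same effect as Guava's Shorts.compare: most under-replicated first.
      return Short.compare(replicationCount, o.replicationCount);
    }
  }

  private final Queue<ReplicationRequest> queue = new PriorityQueue<>();

  public void add(ReplicationRequest repObj) {
    queue.add(repObj); // single call, as suggested in the review
  }

  public ReplicationRequest poll() {
    return queue.poll();
  }
}
```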
[jira] [Commented] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524551#comment-16524551 ] genericqa commented on HDDS-195:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 33s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 28s | trunk passed |
| +1 | compile | 0m 50s | trunk passed |
| +1 | checkstyle | 0m 14s | trunk passed |
| +1 | mvnsite | 0m 51s | trunk passed |
| +1 | shadedclient | 12m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 9s | trunk passed |
| +1 | javadoc | 0m 47s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 47s | the patch passed |
| +1 | compile | 0m 45s | the patch passed |
| +1 | javac | 0m 45s | the patch passed |
| +1 | checkstyle | 0m 10s | the patch passed |
| +1 | mvnsite | 0m 43s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 25s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 18s | the patch passed |
| +1 | javadoc | 0m 42s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 28s | framework in the patch passed. |
| +1 | unit | 0m 32s | container-service in the patch passed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 64m 10s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-195 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929313/HDDS-195.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9b9f9bfec369 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bedc4fe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/372/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/framework
[jira] [Commented] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524526#comment-16524526 ] Elek, Marton commented on HDDS-195:

The second version (the 2nd patch plus a whitespace fix = the 3rd) is more extensible. It no longer has a default implementation; instead, CloseCommandWatcher and CopyContainerWatcher can implement the onTimeout and onFinished methods. New helper methods (contains/remove) are also included.

> Create generic CommandWatcher utility
> -------------------------------------
>
>          Key: HDDS-195
>          URL: https://issues.apache.org/jira/browse/HDDS-195
>      Project: Hadoop Distributed Data Store
>   Issue Type: Improvement
>     Reporter: Elek, Marton
>     Assignee: Elek, Marton
>     Priority: Major
>      Fix For: 0.2.1
>
>  Attachments: HDDS-195.001.patch, HDDS-195.002.patch, HDDS-195.003.patch
>
>
> In some cases we need a class which can track the status of the outgoing commands.
> The commands should be resent after a while unless a status message about command completion is received.
> On a high level, we need a builder factory which takes the following parameters:
> * the (destination) event type and the payload of the command which should be repeated.
> * the ID of the command/event.
> * the event/topic of the completion messages. If an IdentifiableEventPayload is received on this topic, the specific event is done and does not need to be resent.
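Combining the description and this comment, the watcher's contract is: track in-flight commands by ID, stop watching when a completion event arrives, and fire a timeout hook for anything overdue, with onTimeout/onFinished as the extension points and contains/remove-style helpers. A minimal, hypothetical sketch of that contract; this is not the actual HDDS implementation, and the timer handling is deliberately simplified.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch of a generic command watcher. */
public abstract class CommandWatcherSketch<T> {
  private final Map<Long, T> pending = new ConcurrentHashMap<>();
  private final Map<Long, Long> sentAtMillis = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  protected CommandWatcherSketch(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  /** Extension point: typically resend the command. */
  protected abstract void onTimeout(T payload);

  /** Extension point: completion acknowledgement received. */
  protected abstract void onFinished(T payload);

  public void track(long id, T payload, long nowMillis) {
    pending.put(id, payload);
    sentAtMillis.put(id, nowMillis);
  }

  public boolean contains(long id) {
    return pending.containsKey(id);
  }

  /** Completion event for this ID: stop watching and notify. */
  public void complete(long id) {
    T payload = pending.remove(id);
    sentAtMillis.remove(id);
    if (payload != null) {
      onFinished(payload);
    }
  }

  /** Periodic sweep: fire onTimeout for every overdue command. */
  public void sweep(long nowMillis) {
    for (Map.Entry<Long, Long> e : sentAtMillis.entrySet()) {
      if (nowMillis - e.getValue() >= timeoutMillis) {
        T payload = pending.get(e.getKey());
        sentAtMillis.put(e.getKey(), nowMillis); // restart the timer after resend
        if (payload != null) {
          onTimeout(payload);
        }
      }
    }
  }
}
```

A CloseCommandWatcher-style subclass would only need to supply the two hooks; the explicit clock parameters keep the sketch testable without real timers.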
[jira] [Updated] (HDDS-187) Command status publisher for datanode
[ https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-187:

Description:
Currently SCM sends a set of commands to the DataNode, and the DataNode executes them via CommandHandler. This jira intends to create a command status publisher which will report the status of these commands back to SCM.

(was: Create over replicated queue to replicate over replicated containers in Ozone.)

> Command status publisher for datanode
> -------------------------------------
>
>             Key: HDDS-187
>             URL: https://issues.apache.org/jira/browse/HDDS-187
>         Project: Hadoop Distributed Data Store
>      Issue Type: Bug
>      Components: SCM
> Affects Versions: 0.2.1
>        Reporter: Ajay Kumar
>        Assignee: Ajay Kumar
>        Priority: Major
>         Fix For: 0.2.1
>
>
> Currently SCM sends a set of commands to the DataNode, and the DataNode executes them via CommandHandler. This jira intends to create a command status publisher which will report the status of these commands back to SCM.
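The publisher idea in the description can be sketched as: the DataNode records a per-command status as commands arrive and finish, and periodically snapshots that state into a report sent back to SCM. All type and method names below are guesses for illustration, not the real HDDS classes.

```java
import java.util.Map;
import java.util.TreeMap;

/** Illustrative sketch of a datanode-side command status publisher. */
public class CommandStatusPublisherSketch {
  public enum Status { PENDING, EXECUTED, FAILED }

  // Command ID -> last known status.
  private final Map<Long, Status> statuses = new TreeMap<>();

  /** Called when an SCM command is handed to a CommandHandler. */
  public void commandReceived(long cmdId) {
    statuses.put(cmdId, Status.PENDING);
  }

  /** Called by the handler once execution completes. */
  public void commandFinished(long cmdId, boolean success) {
    statuses.put(cmdId, success ? Status.EXECUTED : Status.FAILED);
  }

  /** Snapshot to piggyback on the next heartbeat to SCM. */
  public Map<Long, Status> buildReport() {
    return new TreeMap<>(statuses);
  }
}
```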
[jira] [Updated] (HDDS-187) Command status publisher for datanode
[ https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-187:

Summary: Command status publisher for datanode (was: Create over replicated queue)
[jira] [Updated] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-194:

Attachment: HDDS-194.002.patch

> Remove NodePoolManager and node pool handling from SCM
> ------------------------------------------------------
>
>          Key: HDDS-194
>          URL: https://issues.apache.org/jira/browse/HDDS-194
>      Project: Hadoop Distributed Data Store
>   Issue Type: Improvement
>   Components: SCM
>     Reporter: Elek, Marton
>     Assignee: Elek, Marton
>     Priority: Major
>      Fix For: 0.2.1
>
>  Attachments: HDDS-194.001.patch, HDDS-194.002.patch
>
>
> The current code uses NodePoolManager and ContainerSupervisor to group the nodes into smaller groups (pools) and handle the pull-based node reports group by group.
> But this code is not used any more, as we switched back to a push-based model. In the datanode the reports can be handled by the specific report handlers, and on the SCM side the reports will be processed by the SCMHeartbeatDispatcher, which will send the events to the EventQueue.
> As of now the NodePool abstraction can be removed from the code.
[jira] [Commented] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524487#comment-16524487 ] Elek, Marton commented on HDDS-194:

The patch is re-uploaded to get a cleaner view of the test failures.
[jira] [Commented] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524486#comment-16524486 ] Elek, Marton commented on HDDS-193:

The unit test failures should be independent. Re-uploading the v4 patch to prove it.

> Make Datanode heartbeat dispatcher in SCM event based
> -----------------------------------------------------
>
>          Key: HDDS-193
>          URL: https://issues.apache.org/jira/browse/HDDS-193
>      Project: Hadoop Distributed Data Store
>   Issue Type: Improvement
>   Components: SCM
>     Reporter: Elek, Marton
>     Assignee: Elek, Marton
>     Priority: Major
>      Fix For: 0.2.1
>
>  Attachments: HDDS-193.001.patch, HDDS-193.002.patch, HDDS-193.003.patch, HDDS-193.004.patch, HDDS-193.004.patch
>
>
> HDDS-163 introduced a new dispatcher on the SCM side to send the heartbeat report parts to the appropriate listeners. I propose to make it EventQueue based, to handle/monitor these async calls in the same way as the other events.
> Report handlers would subscribe to the specific events to process the information.
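The proposal above is a publish/subscribe shape: report handlers subscribe to specific event types, and the heartbeat dispatcher publishes each report part as an event. A simplified, self-contained sketch of that shape; the real HDDS EventQueue uses typed event objects rather than the string keys used here for brevity.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Illustrative sketch of an event queue with subscribing report handlers. */
public class EventQueueSketch {
  // Event type -> subscribed handlers, in registration order.
  private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

  public void subscribe(String eventType, Consumer<Object> handler) {
    handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
  }

  /** Deliver the payload to every handler subscribed to this event type. */
  public int publish(String eventType, Object payload) {
    List<Consumer<Object>> list = handlers.getOrDefault(eventType, List.of());
    for (Consumer<Object> h : list) {
      h.accept(payload);
    }
    return list.size(); // number of handlers notified
  }
}
```

With this shape, the heartbeat dispatcher would publish each part of the heartbeat (node report, container report, ...) under its own event type, and each report handler only sees the events it subscribed to.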
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193:

Attachment: HDDS-193.004.patch
[jira] [Updated] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-195:

Attachment: HDDS-195.003.patch
[jira] [Commented] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524480#comment-16524480 ] genericqa commented on HDDS-193:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 2m 26s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 5s | trunk passed |
| +1 | compile | 27m 31s | trunk passed |
| +1 | checkstyle | 0m 18s | trunk passed |
| +1 | mvnsite | 0m 55s | trunk passed |
| +1 | shadedclient | 10m 26s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 0m 33s | trunk passed |
| +1 | javadoc | 0m 47s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 47s | the patch passed |
| +1 | compile | 27m 8s | the patch passed |
| +1 | javac | 27m 8s | the patch passed |
| +1 | checkstyle | 0m 19s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 51s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 0m 43s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 43s | server-scm in the patch passed. |
| -1 | unit | 35m 58s | integration-test in the patch failed. |
| -1 | asflicense | 0m 35s | The patch generated 2 ASF License warnings. |
| | | 149m 59s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.TestSCMCli |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-193 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929289/HDDS-193.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Updated] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-195: -- Attachment: HDDS-195.002.patch > Create generic CommandWatcher utility > - > > Key: HDDS-195 > URL: https://issues.apache.org/jira/browse/HDDS-195 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-195.001.patch, HDDS-195.002.patch > > > In some cases we need a class which can track the status of the outgoing > commands. > The commands should be resent after a while unless a status message is > received about command completion. > At a high level, we need a builder factory, which takes the following > parameters: > * (destination) event type and the payload of the command which should be > repeated. > * the ID of the command/event > * The event/topic of the completion messages. If an IdentifiableEventPayload > is received on this topic, the specific event is done and does not need to be > resent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
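The resend-until-confirmed behaviour described in HDDS-195 can be sketched as follows. This is an illustrative sketch only — the class and method names (`CommandWatcher`, `watch`, `complete`, `expired`) are assumptions for illustration, not the API from the attached patches:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the CommandWatcher idea: track outgoing commands
// by id and report any that have not been confirmed within a timeout so
// the caller can resend them.
public class CommandWatcher<T> {
  private final long timeoutMillis;
  private final Map<Long, T> pending = new ConcurrentHashMap<>();
  private final Map<Long, Long> sentAt = new ConcurrentHashMap<>();

  public CommandWatcher(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  // Register a command that should be repeated until completion is observed.
  public void watch(long id, T command, long now) {
    pending.put(id, command);
    sentAt.put(id, now);
  }

  // Called when a completion/status message for the given id arrives.
  public void complete(long id) {
    pending.remove(id);
    sentAt.remove(id);
  }

  // Return ids of commands whose timeout expired; the caller re-sends them.
  public List<Long> expired(long now) {
    List<Long> out = new ArrayList<>();
    for (Map.Entry<Long, Long> e : sentAt.entrySet()) {
      if (now - e.getValue() >= timeoutMillis) {
        out.add(e.getKey());
        sentAt.put(e.getKey(), now); // reset the timer after a resend
      }
    }
    return out;
  }

  public boolean isPending(long id) {
    return pending.containsKey(id);
  }

  public static void main(String[] args) {
    CommandWatcher<String> w = new CommandWatcher<>(100);
    w.watch(1, "replicate-container-7", 0);
    w.watch(2, "close-container-9", 0);
    w.complete(2);                       // completion message received for id 2
    List<Long> toResend = w.expired(150);
    System.out.println(toResend.size() + " " + w.isPending(1) + " " + w.isPending(2));
    // prints "1 true false": only the unconfirmed command is due for resend
  }
}
```

In the real design the completion signal would arrive as an IdentifiableEventPayload on the configured completion topic rather than a direct `complete` call.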
[jira] [Commented] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524466#comment-16524466 ] Konstantin Shvachko commented on HDFS-13665: +1 looks good. > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch, > HDFS-13665-HDFS-12943.001.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transactions own processing, resulting in a stateId response that was lower > than expected. However this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > At the end it was agreed upon to move the logic of Server.setupResponse into > Server.doResponse directly. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
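The serialize-once idea behind HDFS-13665 can be illustrated roughly as follows. This is a simplified stand-in, not the Hadoop `Server` code: the point is that the response (including the state id in its header) is encoded once, at response time, instead of being built early and re-serialized when the header field changes.

```java
import java.nio.charset.StandardCharsets;

// Simplified illustration of moving serialization into doResponse: read
// the transaction/state id at the moment the response is encoded, so a
// later header change never forces a second serialization pass.
public class SerializeOnce {
  // Stand-in for the RPC response header; in Hadoop this is a protobuf.
  static byte[] encode(long stateId, String payload) {
    return ("stateId=" + stateId + ";" + payload).getBytes(StandardCharsets.UTF_8);
  }

  static long lastWrittenStateId = 5;

  // doResponse-style flow: serialization happens once, after the call has
  // finished processing and the state id is final.
  static byte[] doResponse(String payload) {
    return encode(lastWrittenStateId, payload);
  }

  public static void main(String[] args) {
    lastWrittenStateId = 7;              // the call updated namespace state
    byte[] resp = doResponse("ok");
    System.out.println(new String(resp, StandardCharsets.UTF_8));
    // prints "stateId=7;ok"
  }
}
```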
[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524459#comment-16524459 ] genericqa commented on HDFS-12976: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDFS-12976 does not apply to HDFS-12943. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-12976 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929305/HDFS-12976-HDFS-12943.004.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24503/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Introduce ObserverReadProxyProvider > --- > > Key: HDFS-12976 > URL: https://issues.apache.org/jira/browse/HDFS-12976 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12976-HDFS-12943.000.patch, > HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, > HDFS-12976-HDFS-12943.003.patch, HDFS-12976-HDFS-12943.004.patch, > HDFS-12976.WIP.patch > > > {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} > interface and be able to submit read requests to ANN and SBN(s). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524451#comment-16524451 ] genericqa commented on HDDS-94: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/acceptance-test hadoop-dist {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s{color} | {color:red} hadoop-dist in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 27s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 32s{color} | {color:green} There were no new shelldocs issues. 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green}
[jira] [Updated] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy
[ https://issues.apache.org/jira/browse/HDDS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-196: Status: Open (was: Patch Available) > PipelineManager should choose datanodes based on ContainerPlacementPolicy > -- > > Key: HDDS-196 > URL: https://issues.apache.org/jira/browse/HDDS-196 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-196.001.patch > > > This is somehow not connected now after refactoring. This ticket is opened to > fix it.
[jira] [Updated] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy
[ https://issues.apache.org/jira/browse/HDDS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-196: Status: Patch Available (was: Open) > PipelineManager should choose datanodes based on ContainerPlacementPolicy > -- > > Key: HDDS-196 > URL: https://issues.apache.org/jira/browse/HDDS-196 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-196.001.patch > > > This is somehow not connected now after refactoring. This ticket is opened to > fix it.
[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524440#comment-16524440 ] Konstantin Shvachko commented on HDFS-12976: Hey [~csun]. Took me some time, sorry. So when saying that {{ConfiguredFailoverProxyProvider}} doesn't have notion of NameNode in HDFS-13687, I meant that the protocol {{}} is completely obscured from it. The proxy does not know that it is talking to NameNodes and therefore cannot call {{getServiceStatus()}} or any other NN RPCs. So we should probably add some interfaces to the type, so that we could actually make the calls. I played a bit with your patch. Attaching v. 004 as illustration only, sure enough it needs more work. # I added {{}} so that we could call {{getServiceStatus()}} on existing proxies. # Added {{alignmentContext}} it is needed for passing transaction ids. # Probably need to write a new implementation of {{createProxyWithAlignmentContext()}} to avoid nasty casting, etc. I was also thinking that {{ObserverReadProxyProvider}} now heavily relies on {{ConfiguredFailoverProxyProvider}}. But there are use cases for {{IPFailoverProxyProvider}} as well. Would be nice if we could combine them. But let's first finish this. > Introduce ObserverReadProxyProvider > --- > > Key: HDFS-12976 > URL: https://issues.apache.org/jira/browse/HDFS-12976 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12976-HDFS-12943.000.patch, > HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, > HDFS-12976-HDFS-12943.003.patch, HDFS-12976-HDFS-12943.004.patch, > HDFS-12976.WIP.patch > > > {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} > interface and be able to submit read requests to ANN and SBN(s). 
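The read-routing idea being discussed for HDFS-12976 can be sketched in isolation. This is a hedged illustration only — `Node`, `State`, and `route` are invented names; the real `ObserverReadProxyProvider` builds on Hadoop's `FailoverProxyProvider` and `HAServiceProtocol` rather than a simple list scan:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: route read calls to an OBSERVER node when one is
// available, and send writes (or reads with no observer) to the ACTIVE.
public class ObserverReadRouting {
  enum State { ACTIVE, STANDBY, OBSERVER }

  static class Node {
    final String name;
    final State state;
    Node(String name, State state) { this.name = name; this.state = state; }
  }

  // Pick a target for the call: reads prefer observers, writes need active.
  static Node route(List<Node> nodes, boolean isRead) {
    if (isRead) {
      for (Node n : nodes) {
        if (n.state == State.OBSERVER) {
          return n;
        }
      }
    }
    for (Node n : nodes) {
      if (n.state == State.ACTIVE) {
        return n;
      }
    }
    throw new IllegalStateException("no active namenode");
  }

  public static void main(String[] args) {
    List<Node> nodes = Arrays.asList(
        new Node("nn1", State.ACTIVE),
        new Node("nn2", State.STANDBY),
        new Node("nn3", State.OBSERVER));
    System.out.println(route(nodes, true).name + " " + route(nodes, false).name);
    // prints "nn3 nn1": reads go to the observer, writes to the active
  }
}
```

The discussion above is precisely about how the proxy provider learns each node's state: the generic proxy type hides the NameNode protocol, so an interface exposing something like `getServiceStatus()` has to be threaded through before routing like this is possible.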
[jira] [Updated] (HDFS-12976) Introduce ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-12976: --- Attachment: HDFS-12976-HDFS-12943.004.patch > Introduce ObserverReadProxyProvider > --- > > Key: HDFS-12976 > URL: https://issues.apache.org/jira/browse/HDFS-12976 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12976-HDFS-12943.000.patch, > HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, > HDFS-12976-HDFS-12943.003.patch, HDFS-12976-HDFS-12943.004.patch, > HDFS-12976.WIP.patch > > > {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} > interface and be able to submit read requests to ANN and SBN(s). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524430#comment-16524430 ] Anu Engineer commented on HDDS-94: -- +1, the acceptance tests are green in Jenkins, I will commit this now. Thanks for the review and help [~elek] > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch, HDDS-94.005.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13643) Implement basic async rpc client
[ https://issues.apache.org/jira/browse/HDFS-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524429#comment-16524429 ] stack commented on HDFS-13643: -- Yeah, we can take a look at [~daryn] stuff when it shows up. On the patch, checkstyles? No need of this since its default 44 compile ? Otherwise, classes could do w/ a bit of class javadoc situating them (though they are @Private audience and its kinda plain what they are about adding basic client on netty). Fine in a follow-up. +1 to commit on branch from me. We should do a writeup on general approach as entrance for those who might be trying to follow-along Good stuff. > Implement basic async rpc client > > > Key: HDFS-13643 > URL: https://issues.apache.org/jira/browse/HDFS-13643 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: HDFS-13572 > > Attachments: HDFS-13643-v1.patch, HDFS-13643-v2.patch, > HDFS-13643.patch > > > Implement the basic async rpc client so we can start working on the DFSClient > implementation ASAP. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524385#comment-16524385 ] Sean Busbey commented on HDFS-13702: Why do you think it didn't apply the latest patch? the reported URL is the latest one AFAICT. > HTrace hooks taking 10-15% CPU in DFS client when disabled > -- > > Key: HDFS-13702 > URL: https://issues.apache.org/jira/browse/HDFS-13702 > Project: Hadoop HDFS > Issue Type: Bug > Components: performance >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hdfs-13702.patch, hdfs-13702.patch, hdfs-13702.patch > > > I am seeing DFSClient.newReaderTraceScope take ~15% CPU in a teravalidate > workload even when HTrace is disabled. This is because it stringifies several > integers. We should avoid all allocation and stringification when htrace is > disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524376#comment-16524376 ] Todd Lipcon commented on HDFS-13702: Not sure what's up with the above precommit. It appears like it didn't apply the latest patch, even though the console log shows it downloading from the latest URL. When I apply the patch locally and build in a clean tree, it builds OK. Any ideas? > HTrace hooks taking 10-15% CPU in DFS client when disabled > -- > > Key: HDFS-13702 > URL: https://issues.apache.org/jira/browse/HDFS-13702 > Project: Hadoop HDFS > Issue Type: Bug > Components: performance >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hdfs-13702.patch, hdfs-13702.patch, hdfs-13702.patch > > > I am seeing DFSClient.newReaderTraceScope take ~15% CPU in a teravalidate > workload even when HTrace is disabled. This is because it stringifies several > integers. We should avoid all allocation and stringification when htrace is > disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
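The fix HDFS-13702 describes — skipping allocation and stringification when tracing is off — follows a common guard pattern. The `Tracer` interface below is a simplified stand-in for the HTrace API, not its real signature:

```java
// Illustrative sketch: guard expensive string building behind an
// "is tracing enabled" check so the hot read path allocates nothing
// when tracing is disabled.
public class TraceGuard {
  interface Tracer {
    boolean isEnabled();
    void annotate(String msg);
  }

  static int annotations = 0;

  static Tracer disabledTracer() {
    return new Tracer() {
      public boolean isEnabled() { return false; }
      public void annotate(String msg) { annotations++; }
    };
  }

  // Before: always builds the string (boxing + concatenation), even when
  // tracing is disabled and the result is thrown away.
  static void readBlockSlow(Tracer t, long blockId, long offset, long len) {
    t.annotate("read block=" + blockId + " off=" + offset + " len=" + len);
  }

  // After: the string is only built when a tracer is actually collecting.
  static void readBlockFast(Tracer t, long blockId, long offset, long len) {
    if (t.isEnabled()) {
      t.annotate("read block=" + blockId + " off=" + offset + " len=" + len);
    }
  }

  public static void main(String[] args) {
    Tracer off = disabledTracer();
    readBlockSlow(off, 1, 0, 4096);   // pays for the concatenation anyway
    int afterSlow = annotations;
    readBlockFast(off, 1, 0, 4096);   // skips it entirely
    System.out.println(afterSlow + " " + annotations);
    // prints "1 1": the fast path made no annotate call at all
  }
}
```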
[jira] [Commented] (HDFS-13635) Incorrect message when block is not found
[ https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524361#comment-16524361 ] Wei-Chiu Chuang commented on HDFS-13635: Thanks [~gabor.bota] +1 > Incorrect message when block is not found > - > > Key: HDFS-13635 > URL: https://issues.apache.org/jira/browse/HDFS-13635 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Major > Attachments: HDFS-13635.001.patch, HDFS-13635.002.patch, > HDFS-13635.003.patch > > > When client opens a file, it asks DataNode to check the blocks' visible > length. If somehow the block is not on the DN, it throws "Cannot append to a > non-existent replica" message, which is incorrect, because > getReplicaVisibleLength() is called for different use, just not for appending > to a block. It should just state "block is not found" > The following stacktrace comes from a CDH5.13, but it looks like the same > warning exists in Apache Hadoop trunk. 
> {noformat} > 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 2 on 50020, call > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength > from 10.0.0.14:53217 Call#38334117 Retry#0 > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot > append to a non-existent replica > BP-725378529-10.236.236.8-1410027444173:13276792346 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756) > at > org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107) > at > org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524359#comment-16524359 ] genericqa commented on HDFS-13702: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13702 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929270/hdfs-13702.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux abd8325e84cd 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b69ba0f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24502/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524351#comment-16524351 ] Todd Lipcon commented on HDFS-13703: Anyone know if the failed tests are known flakes? Doesn't seem related to this change. > Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit > > > Key: HDFS-13703 > URL: https://issues.apache.org/jira/browse/HDFS-13703 > Project: Hadoop HDFS > Issue Type: Improvement > Components: performance >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hdfs-13703.patch > > > The DFSClient creates a CorruptedBlocks object, which contains a HashMap, on > every read call. In most cases, a read will not hit any corrupted blocks, and > this hashmap is not used. It seems the JIT isn't smart enough to eliminate > this allocation. We would be better off avoiding it and only allocating in > the rare case when a corrupt block is hit. > Removing this allocation reduced CPU usage of a TeraValidate job by about 10%. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
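The lazy-allocation pattern HDFS-13703 proposes can be sketched as below. `CorruptedBlocks` here is a simplified stand-in for the DFSClient helper class, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: allocate the corrupted-block map only on the rare
// path that actually records a corrupt block, so the common read path
// allocates nothing.
public class CorruptedBlocksDemo {
  static class CorruptedBlocks {
    private Map<Long, String> map;    // null until the first corruption

    void addCorruptedBlock(long blockId, String datanode) {
      if (map == null) {
        map = new HashMap<>();        // pay for the map only when needed
      }
      map.put(blockId, datanode);
    }

    Map<Long, String> getCorruptionMap() {
      return map;                     // null means "no corrupt blocks hit"
    }
  }

  public static void main(String[] args) {
    CorruptedBlocks clean = new CorruptedBlocks();
    CorruptedBlocks dirty = new CorruptedBlocks();
    dirty.addCorruptedBlock(42L, "dn1:9866");
    System.out.println((clean.getCorruptionMap() == null) + " "
        + dirty.getCorruptionMap().size());
    // prints "true 1": the clean reader never allocated a map
  }
}
```

Callers then null-check the map instead of asking an always-present (and usually empty) HashMap, which is what makes the common case allocation-free.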
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Attachment: HDDS-193.004.patch > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch, HDDS-193.002.patch, > HDDS-193.003.patch, HDDS-193.004.patch > > > HDDS-163 introduced a new dispatcher in the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. >
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Attachment: HDDS-193.003.patch > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch, HDDS-193.002.patch, > HDDS-193.003.patch > > > HDDS-163 introduced a new dispatcher in the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524339#comment-16524339 ] genericqa commented on HDFS-13703: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}206m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13703 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929252/hdfs-13703.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 01085243969b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy
[ https://issues.apache.org/jira/browse/HDDS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-196: Fix Version/s: 0.2.1 > PipelineManager should choose datanodes based on ContainerPlacementPolicy > -- > > Key: HDDS-196 > URL: https://issues.apache.org/jira/browse/HDDS-196 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-196.001.patch > > > This is somehow not connected now after refactoring. This ticket is opened to > fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy
[ https://issues.apache.org/jira/browse/HDDS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-196: Attachment: HDDS-196.001.patch > PipelineManager should choose datanodes based on ContainerPlacementPolicy > -- > > Key: HDDS-196 > URL: https://issues.apache.org/jira/browse/HDDS-196 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDDS-196.001.patch > > > This is somehow not connected now after refactoring. This ticket is opened to > fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524333#comment-16524333 ] Elek, Marton commented on HDDS-94: -- Started a new full acceptance test build with the restored line (v5): https://builds.apache.org/job/Hadoop-precommit-ozone-acceptance/23/ > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch, HDDS-94.005.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-94: - Attachment: HDDS-94.005.patch > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch, HDDS-94.005.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524328#comment-16524328 ] Elek, Marton commented on HDDS-94: -- OK. I got it. The following line was accidentally removed from hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-config {code} CORE-SITE.XML_fs.o3.impl=org.apache.hadoop.fs.ozone.OzoneFileSystem {code} We need this. By default the file systems are defined in core-default.xml, but this test (which is a real ozone cluster + a hadoop 3.1 client) doesn't contain it, as the core-default.xml of hadoop 3.1 doesn't have the required ozone fs entry. I will be +1 if this specific line is restored. > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
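For context, the `CORE-SITE.XML_` prefix in the docker-config is the docker image's convention for injecting a property into core-site.xml, so the restored line corresponds to a core-site.xml entry along these lines (a sketch of the equivalent XML, assuming the standard Hadoop configuration format):

```xml
<!-- Registers the o3:// filesystem scheme. Needed explicitly because the
     hadoop 3.1 client's core-default.xml has no ozone entry. -->
<property>
  <name>fs.o3.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzoneFileSystem</value>
</property>
```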
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524319#comment-16524319 ] Elek, Marton commented on HDDS-94: -- Thank you very much for the update. Unfortunately the ozonefs acceptance tests are not working for me. Maybe my fault, still investigating, but without the patch they pass. > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524317#comment-16524317 ] genericqa commented on HDDS-195: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 24s{color} | {color:red} framework in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdds.server.events.TestEventWatcher | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-195 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929273/HDDS-195.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 76cc3ecdf5e3 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b69ba0f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | whitespace | https://builds.apache.org/job/PreCommit-HDDS-Build/367/artifact/out/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/367/artifact/out/patch-unit-hadoop-hdds_framework.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/367/testReport/ | | Max. process+thread count | 459 (vs. ulimit of 1) | | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework | | Console output |
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524312#comment-16524312 ] genericqa commented on HDDS-173: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | || || || || {color:brown} HDDS-48 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 1s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 19s{color} | {color:green} HDDS-48 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 27m 5s{color} | {color:red} root in HDDS-48 failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} HDDS-48 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 29s{color} | {color:green} HDDS-48 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 8m 3s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 42s{color} | {color:green} HDDS-48 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s{color} | {color:green} HDDS-48 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 6s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 22m 6s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 22m 6s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 35s{color} | {color:red} patch has errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 15s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 58s{color} | {color:red} hadoop-hdds/container-service generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 55s{color} | {color:red} container-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s{color} | {color:green} client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} tools in the patch passed. {color} | |
[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap
[ https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13536: -- Status: Open (was: Patch Available) > [PROVIDED Storage] HA for InMemoryAliasMap > -- > > Key: HDFS-13536 > URL: https://issues.apache.org/jira/browse/HDFS-13536 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Attachments: HDFS-13536.001.patch, HDFS-13536.002.patch > > > Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN > configured in high availability. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-195: -- Attachment: HDDS-195.001.patch > Create generic CommandWatcher utility > - > > Key: HDDS-195 > URL: https://issues.apache.org/jira/browse/HDDS-195 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-195.001.patch > > > In some cases we need a class which can track the status of the outgoing > commands. > The commands should be resent after a while except a status message is > received about command completion. > On high level, we need a builder factory, which takes the following > parameters: > * (destination) event type and the payload of the command which should be > repeated. > * the ID of the command/event > * The event/topic of the completion messages. If an IdentifiableEventPayload > is received on this topic, the specific event is done and don't need to > resend it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-195: -- Status: Patch Available (was: Open) > Create generic CommandWatcher utility > - > > Key: HDDS-195 > URL: https://issues.apache.org/jira/browse/HDDS-195 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-195.001.patch > > > In some cases we need a class which can track the status of the outgoing > commands. > The commands should be resent after a while except a status message is > received about command completion. > On high level, we need a builder factory, which takes the following > parameters: > * (destination) event type and the payload of the command which should be > repeated. > * the ID of the command/event > * The event/topic of the completion messages. If an IdentifiableEventPayload > is received on this topic, the specific event is done and don't need to > resend it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-195: -- Description: In some cases we need a class which can track the status of the outgoing commands. The commands should be resent after a while except a status message is received about command completion. On high level, we need a builder factory, which takes the following parameters: * (destination) event type and the payload of the command which should be repeated. * the ID of the command/event * The event/topic of the completion messages. If an IdentifiableEventPayload is received on this topic, the specific event is done and don't need to resend it. was: In some cases we need a class which can track the status of the outgoing commands. The commands should be resent after a while except a status message is received about command completion. On high level, we need a builder factory, which takes the following parameters: * (destination) event type and the payload of the command which should be repeated. * the ID of the command/event * The event/topic of the completion messages. If a EventWithIdentifier is received on this topic, the specific event is done and don't need to resend it. > Create generic CommandWatcher utility > - > > Key: HDDS-195 > URL: https://issues.apache.org/jira/browse/HDDS-195 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-195.001.patch > > > In some cases we need a class which can track the status of the outgoing > commands. > The commands should be resent after a while except a status message is > received about command completion. > On high level, we need a builder factory, which takes the following > parameters: > * (destination) event type and the payload of the command which should be > repeated. 
> * the ID of the command/event > * The event/topic of the completion messages. If an IdentifiableEventPayload > is received on this topic, the specific event is done and don't need to > resend it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
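The CommandWatcher contract described in HDDS-195 above can be sketched as follows: remember each sent command by its ID, drop it when a completion event with the same ID arrives, and treat whatever remains as candidates for resending. Class and method names are hypothetical; this is not the patch's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Tracks outgoing commands until a completion event acknowledges them.
// A real watcher would also attach a timeout and re-fire the destination
// event for anything still pending, as the description above outlines.
class CommandWatcher {
    private final Map<Long, Object> pending = new HashMap<>();

    void commandSent(long id, Object payload) {
        pending.put(id, payload);
    }

    // Called when an IdentifiableEventPayload arrives on the completion topic.
    void completionReceived(long id) {
        pending.remove(id);
    }

    // Unacknowledged commands: the resend candidates.
    Map<Long, Object> pendingCommands() {
        return pending;
    }
}
```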
[jira] [Assigned] (HDDS-195) Create generic CommandWatcher utility
[ https://issues.apache.org/jira/browse/HDDS-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton reassigned HDDS-195: - Assignee: Elek, Marton > Create generic CommandWatcher utility > - > > Key: HDDS-195 > URL: https://issues.apache.org/jira/browse/HDDS-195 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > > In some cases we need a class which can track the status of the outgoing > commands. > The commands should be resent after a while except a status message is > received about command completion. > On high level, we need a builder factory, which takes the following > parameters: > * (destination) event type and the payload of the command which should be > repeated. > * the ID of the command/event > * The event/topic of the completion messages. If a EventWithIdentifier is > received on this topic, the specific event is done and don't need to resend > it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy
Xiaoyu Yao created HDDS-196: --- Summary: PipelineManager should choose datanodes based on ContainerPlacementPolicy Key: HDDS-196 URL: https://issues.apache.org/jira/browse/HDDS-196 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao This is somehow not connected now after refactoring. This ticket is opened to fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524270#comment-16524270 ] Sean Busbey commented on HDFS-13702: {quote} I think we should commit this patch, +1, and then we can file another to review how to move forward with tracing in light of recent developments in htrace project; i.e. purge all other htrace references, look into alternatives, etc. {quote} please be sure to link the follow-on jira to this one. probably should get a DISCUSS thread on common-dev@ > HTrace hooks taking 10-15% CPU in DFS client when disabled > -- > > Key: HDFS-13702 > URL: https://issues.apache.org/jira/browse/HDFS-13702 > Project: Hadoop HDFS > Issue Type: Bug > Components: performance >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hdfs-13702.patch, hdfs-13702.patch, hdfs-13702.patch > > > I am seeing DFSClient.newReaderTraceScope take ~15% CPU in a teravalidate > workload even when HTrace is disabled. This is because it stringifies several > integers. We should avoid all allocation and stringification when htrace is > disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
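The fix direction described in HDFS-13702 above, doing the integer stringification only when tracing is actually enabled, can be sketched like this. Names are illustrative, not the actual HTrace or DFSClient API.

```java
// Guard the expensive string-building behind the tracing check, so the
// disabled-tracing fast path does no stringification and no allocation.
class TraceGuard {
    private final boolean tracing;

    TraceGuard(boolean tracing) {
        this.tracing = tracing;
    }

    String maybeDescribe(long blockId, long offset, int len) {
        if (!tracing) {
            return null; // fast path: nothing allocated
        }
        // Slow path, taken only when a trace span will actually be created.
        return "read blockId=" + blockId + " offset=" + offset + " len=" + len;
    }
}
```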
[jira] [Commented] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524249#comment-16524249 ]

genericqa commented on HDDS-194:
--------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 38s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 1s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 0m 25s | Maven dependency ordering for branch |
| +1 | mvninstall | 29m 0s | trunk passed |
| +1 | compile | 32m 21s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 2m 19s | trunk passed |
| +1 | shadedclient | 14m 9s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 2m 37s | trunk passed |
| +1 | javadoc | 2m 17s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 47s | the patch passed |
| +1 | compile | 30m 13s | the patch passed |
| +1 | javac | 30m 13s | the patch passed |
| +1 | checkstyle | 0m 24s | the patch passed |
| +1 | mvnsite | 2m 18s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 17s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 3m 9s | the patch passed |
| +1 | javadoc | 2m 16s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 18s | common in the patch passed. |
| +1 | unit | 2m 26s | server-scm in the patch passed. |
| +1 | unit | 0m 31s | tools in the patch passed. |
| -1 | unit | 21m 18s | integration-test in the patch failed. |
| +1 | asflicense | 0m 40s | The patch does not generate ASF License warnings. |
| | | 161m 36s | |

|| Reason || Tests ||
| Failed junit tests |
[jira] [Updated] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13702:
-------------------------------
    Attachment: hdfs-13702.patch
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524221#comment-16524221 ]

genericqa commented on HDFS-13702:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 39s | trunk passed |
| +1 | compile | 16m 57s | trunk passed |
| +1 | checkstyle | 0m 14s | trunk passed |
| +1 | mvnsite | 2m 8s | trunk passed |
| +1 | shadedclient | 13m 42s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 40s | trunk passed |
| +1 | javadoc | 1m 20s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 40s | hadoop-hdfs in the patch failed. |
| -1 | compile | 1m 15s | hadoop-hdfs-project in the patch failed. |
| -1 | javac | 1m 15s | hadoop-hdfs-project in the patch failed. |
| +1 | checkstyle | 0m 9s | the patch passed |
| -1 | mvnsite | 0m 36s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | shadedclient | 3m 6s | patch has errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 20s | hadoop-hdfs in the patch failed. |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 29s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 0m 36s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 80m 54s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13702 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929253/hdfs-13702.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b0df8745d820 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3e58633 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/24501/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | |
[jira] [Commented] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524182#comment-16524182 ]

Todd Lipcon commented on HDFS-13703:
------------------------------------

BTW, it would be better if we could avoid allocating the CorruptedBlocks structure entirely, but I couldn't see a straightforward way of doing that, especially for pread.

> Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-13703
>                 URL: https://issues.apache.org/jira/browse/HDFS-13703
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: performance
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Major
>         Attachments: hdfs-13703.patch
>
>
> The DFSClient creates a CorruptedBlocks object, which contains a HashMap, on
> every read call. In most cases, a read will not hit any corrupted blocks, and
> this hashmap is not used. It seems the JIT isn't smart enough to eliminate
> this allocation. We would be better off avoiding it and only allocating in
> the rare case when a corrupt block is hit.
> Removing this allocation reduced CPU usage of a TeraValidate job by about 10%.
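The lazy-allocation idea discussed in this thread can be sketched as follows. Note this is an illustrative sketch, not the actual HDFS-13703 patch: the class name `LazyCorruptedBlocks` and its methods are hypothetical stand-ins for the real DFSClient structures.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lazy-allocation pattern: the map tracking
// corrupted blocks is created only on the first corrupt-block hit, so the
// common no-corruption read path allocates nothing at all.
public class LazyCorruptedBlocks {
    private Map<String, Integer> corruptionMap; // stays null until first corrupt block

    public void addCorruptedBlock(String blockId, int datanodeId) {
        if (corruptionMap == null) {
            corruptionMap = new HashMap<>(); // allocated only on the rare path
        }
        corruptionMap.put(blockId, datanodeId);
    }

    public Map<String, Integer> getCorruptionMap() {
        // callers see an immutable empty map instead of null
        return corruptionMap == null ? Collections.emptyMap() : corruptionMap;
    }

    public static void main(String[] args) {
        LazyCorruptedBlocks tracker = new LazyCorruptedBlocks();
        System.out.println(tracker.getCorruptionMap().isEmpty()); // true: nothing allocated yet
        tracker.addCorruptedBlock("blk_1", 7);
        System.out.println(tracker.getCorruptionMap().size()); // 1
    }
}
```

The trade-off is a null check on the hot path in exchange for skipping a HashMap allocation per read, which matters when reads are frequent and corruption is rare.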
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HDDS-173:
--------------------------------
    Attachment: HDDS-173-HDDS-48.003.patch

> Refactor Dispatcher and implement Handler for new ContainerIO design
> --------------------------------------------------------------------
>
>                 Key: HDDS-173
>                 URL: https://issues.apache.org/jira/browse/HDDS-173
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch,
> HDDS-173-HDDS-48.003.patch
>
>
> Dispatcher will pass the ContainerCommandRequests to the corresponding
> Handler based on the ContainerType. Each ContainerType will have its own
> Handler. The Handler class will process the message.
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524178#comment-16524178 ]

Hanisha Koneru commented on HDDS-173:
-------------------------------------

Thanks for the review [~xyao]. I have addressed the comments in patch v03.

{quote}
ContainerSet.java
Line 63: remove the throw StorageContainerException in the declaration.
{quote}
This StorageContainerException is caught by KeyValueHandler and returned as a Response to the client.

{quote}
KeyUtils.java
Now we have two versions, one in common/helpers and one in keyvalue/helpers. Is it possible to consolidate to avoid duplicate code such as getDB/RemoveDB/shutdownCache, etc.?
{quote}
A lot of these classes will be removed when integrating the code. common/helpers/KeyUtils, ChunkUtils etc. will go away.

{quote}
We are passing the whole containerSet to all handlers without checking its supported container type. As a result, the Handler#handle() method should check container#getContainerType to see if it can handle the command for the specific container type before further processing in a subclass like KeyValueHandler#handle().
{quote}
HddsDispatcher checks the containerType and passes the request to the correct Handler accordingly. So KeyValueHandler will only get requests corresponding to containers with KeyValue as the containerType. Should we still have another check inside KeyValueHandler#handle()?

{quote}
Line 168: Do we have a follow-up JIRA to handle out-of-space errors and I/O errors in the volume layer?
{quote}
This will be added in phase 2.
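The dispatch-by-container-type design discussed above can be sketched as a handler registry keyed on container type, so each handler only ever receives requests for its own type. This is an illustrative sketch, not the actual HDDS-173 code: `TypedDispatcher`, `Handler`, and the request/response types are hypothetical simplifications of HddsDispatcher and the ContainerCommand protos.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of dispatch-by-container-type: the dispatcher looks
// up the handler registered for the request's container type and forwards
// the request, so type checks live in one place rather than in every handler.
public class TypedDispatcher {
    enum ContainerType { KEY_VALUE }

    // Functional interface standing in for the real Handler class.
    interface Handler {
        String handle(String request);
    }

    private final Map<ContainerType, Handler> handlers =
        new EnumMap<>(ContainerType.class);

    void register(ContainerType type, Handler handler) {
        handlers.put(type, handler);
    }

    String dispatch(ContainerType type, String request) {
        Handler h = handlers.get(type);
        if (h == null) {
            // mirrors returning an error response for an unknown container type
            throw new IllegalArgumentException("no handler for " + type);
        }
        return h.handle(request);
    }

    public static void main(String[] args) {
        TypedDispatcher d = new TypedDispatcher();
        d.register(ContainerType.KEY_VALUE, req -> "handled:" + req);
        System.out.println(d.dispatch(ContainerType.KEY_VALUE, "putKey")); // handled:putKey
    }
}
```

Under this shape, a redundant type check inside each handler is defensive rather than required, which is exactly the question raised in the comment above.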
[jira] [Commented] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524176#comment-16524176 ]

Todd Lipcon commented on HDFS-13703:
------------------------------------

Before/after perf stat results from teravalidate running on LJR against a remote 1GB dataset:

{code}
before:

      26447.108758      task-clock (msec)         #    2.962 CPUs utilized            ( +-  1.69% )
            24,975      context-switches          #    0.944 K/sec                    ( +-  4.97% )
             1,499      cpu-migrations            #    0.057 K/sec                    ( +-  5.19% )
           489,110      page-faults               #    0.018 M/sec                    ( +-  1.80% )
    75,262,938,888      cycles                    #    2.846 GHz                      ( +-  1.66% )
    72,494,344,992      instructions              #    0.96  insn per cycle           ( +-  0.61% )
    15,345,052,758      branches                  #  580.217 M/sec                    ( +-  0.92% )
       169,449,684      branch-misses             #    1.10% of all branches          ( +-  1.08% )

       8.929516900 seconds time elapsed                                          ( +-  4.40% )

after:

      22169.778723      task-clock (msec)         #    2.555 CPUs utilized            ( +-  4.90% )
            24,194      context-switches          #    0.001 M/sec                    ( +-  1.19% )
             1,395      cpu-migrations            #    0.063 K/sec                    ( +-  2.27% )
           422,476      page-faults               #    0.019 M/sec                    ( +-  2.14% )
    63,362,629,008      cycles                    #    2.858 GHz                      ( +-  4.55% )
    68,717,989,010      instructions              #    1.08  insn per cycle           ( +-  1.57% )
    13,715,267,587      branches                  #  618.647 M/sec                    ( +-  3.25% )
       171,417,145      branch-misses             #    1.25% of all branches          ( +-  0.99% )

       8.678080322 seconds time elapsed                                          ( +-  2.90% )
{code}
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HDDS-173:
--------------------------------
    Attachment: (was: HDDS-173-HDDS-48.003.patch)
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HDDS-173:
--------------------------------
    Attachment: HDDS-173-HDDS-48.003.patch
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524156#comment-16524156 ]

stack commented on HDFS-13702:
------------------------------

bq. what do you think?

I think we need to be able to trace end-to-end where time is being spent. I think that if htrace is not enabled, it should not add friction. I think harley-davidsons are awful motorcycles but even they don't deserve the abuse they are getting. I think those numbers you posted for the difference your patch makes in throughput stripping htrace are radical.

Poor htrace has been added to the apache attic. It got no loving. htrace in hdfs got no loving either post initial-commit; it was added and then let fester.

I think we should commit this patch, +1, and then we can file another to review how to move forward with tracing in light of recent developments in the htrace project; i.e. purge all other htrace references, look into alternatives, etc.
[jira] [Updated] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13702:
-------------------------------
    Attachment: hdfs-13702.patch
[jira] [Updated] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13703:
-------------------------------
    Attachment: hdfs-13703.patch
[jira] [Updated] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13703:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
[ https://issues.apache.org/jira/browse/HDFS-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524149#comment-16524149 ]

Anu Engineer commented on HDFS-13703:
-------------------------------------

{quote}
TeraValidate job by about 10%
{quote}
Awesome. Looking forward to this patch.
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524147#comment-16524147 ]

Todd Lipcon commented on HDFS-13702:
------------------------------------

Here are some perf results based on a 180GB teravalidate on a small cluster, as well as a 1GB teravalidate on LocalJobRunner (against a remote HDFS):

{code}
3.0.x original:

Avg map time: 18sec
CPU time spent (ms): 2,208,950
GC time elapsed (ms): 68,153

 Performance counter stats for './run-validate.sh' (5 runs):

      22357.081985      task-clock (msec)         #    2.688 CPUs utilized            ( +-  6.78% )
            21,573      context-switches          #    0.965 K/sec                    ( +-  2.58% )
             1,300      cpu-migrations            #    0.058 K/sec                    ( +-  4.82% )
           425,146      page-faults               #    0.019 M/sec                    ( +-  4.52% )
    63,809,409,850      cycles                    #    2.854 GHz                      ( +-  6.56% )
    66,580,182,677      instructions              #    1.04  insn per cycle           ( +-  2.28% )
    13,489,574,848      branches                  #  603.369 M/sec                    ( +-  4.58% )
       158,670,595      branch-misses             #    1.18% of all branches          ( +-  0.35% )

       8.317048233 seconds time elapsed                                          ( +-  0.10% )

3.0.x patched:

Avg map time: 14sec
CPU time spent (ms): 1,750,180
GC time elapsed (ms): 42,468

 Performance counter stats for './run-validate.sh' (5 runs):

      14466.559412      task-clock (msec)         #    2.006 CPUs utilized            ( +-  3.18% )
            21,666      context-switches          #    0.001 M/sec                    ( +-  0.55% )
             1,180      cpu-migrations            #    0.082 K/sec                    ( +-  1.91% )
           234,159      page-faults               #    0.016 M/sec                    ( +-  0.60% )
    41,793,452,250      cycles                    #    2.889 GHz                      ( +-  2.77% )
    55,219,815,925      instructions              #    1.32  insn per cycle           ( +-  1.67% )
     9,837,238,534      branches                  #  679.998 M/sec                    ( +-  2.57% )
       161,071,903      branch-misses             #    1.64% of all branches          ( +-  0.62% )

       7.210730451 seconds time elapsed                                          ( +-  0.25% )
{code}
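The fix direction described in this issue — skipping allocation and integer stringification when tracing is off — can be sketched as a guard on the tracing branch. This is an illustrative sketch only: the `Tracer` interface and method names here are hypothetical stand-ins, not the real HTrace API or the actual HDFS-13702 patch.

```java
// Hypothetical sketch: string formatting for a trace annotation is done
// only when tracing is enabled, so the disabled path performs no
// allocation or integer stringification at all.
public class GuardedTracing {
    interface Tracer {
        boolean isEnabled();
        void annotate(String message);
    }

    static long tracedRead(Tracer tracer, long blockId, long offset, long length) {
        if (tracer.isEnabled()) {
            // stringification of the longs happens only on this branch
            tracer.annotate("read block=" + blockId + " off=" + offset + " len=" + length);
        }
        return length; // stand-in for the actual read work
    }

    public static void main(String[] args) {
        Tracer disabled = new Tracer() {
            public boolean isEnabled() { return false; }
            public void annotate(String m) { throw new AssertionError("should not be called"); }
        };
        System.out.println(tracedRead(disabled, 42L, 0L, 1024L)); // 1024: no annotation built
    }
}
```

The key point is that the `isEnabled()` check is cheap and branch-predictable, whereas the string concatenation it guards allocates on every call — which matches the hot-path cost this issue measured.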
[jira] [Updated] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13702:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Assigned] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon reassigned HDFS-13702:
----------------------------------
    Assignee: Todd Lipcon
[jira] [Updated] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-13702:
-------------------------------
    Attachment: hdfs-13702.patch
[jira] [Commented] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524134#comment-16524134 ] genericqa commented on HDDS-193: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 24s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 55s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdds.scm.server.TestSCMDatanodeHeartbeatDispatcher | | | hadoop.ozone.TestStorageContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-193 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929213/HDDS-193.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux
[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524130#comment-16524130 ] genericqa commented on HDDS-175: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 54s{color} | {color:red} hadoop-hdds/server-scm generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-ozone/tools generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 34s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} tools in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} |
[jira] [Created] (HDFS-13703) Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit
Todd Lipcon created HDFS-13703: -- Summary: Avoid allocation of CorruptedBlocks hashmap when no corrupted blocks are hit Key: HDFS-13703 URL: https://issues.apache.org/jira/browse/HDFS-13703 Project: Hadoop HDFS Issue Type: Improvement Components: performance Reporter: Todd Lipcon Assignee: Todd Lipcon The DFSClient creates a CorruptedBlocks object, which contains a HashMap, on every read call. In most cases, a read will not hit any corrupted blocks, and this hashmap is not used. It seems the JIT isn't smart enough to eliminate this allocation. We would be better off avoiding it and only allocating in the rare case when a corrupt block is hit. Removing this allocation reduced CPU usage of a TeraValidate job by about 10%. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
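The lazy-allocation idea described in this issue can be sketched as below. `CorruptedBlocksSketch` and its field names are illustrative stand-ins, not the actual Hadoop `CorruptedBlocks` class.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: defer the HashMap allocation until a corrupt
// block is actually seen, keeping the common read path allocation-free.
public class CorruptedBlocksSketch {

  // Left null until the first corrupt block is reported.
  private Map<Long, String> corruptionMap;

  // Rare path: allocate the map on demand.
  public void addCorruptedBlock(long blockId, String datanode) {
    if (corruptionMap == null) {
      corruptionMap = new HashMap<>();
    }
    corruptionMap.put(blockId, datanode);
  }

  // May return null; callers treat null as "no corrupt blocks seen".
  public Map<Long, String> getCorruptionMap() {
    return corruptionMap;
  }
}
```

The trade-off is that callers must handle the null case, but on a read path where corruption is rare this removes an allocation per call that the JIT evidently does not eliminate on its own.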
[jira] [Commented] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
[ https://issues.apache.org/jira/browse/HDFS-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524128#comment-16524128 ] Todd Lipcon commented on HDFS-13702: Removing all the HTrace stuff from the block reader path improved the performance of TeraValidate by about 20% and reduced CPU consumption of the job in LocalJobRunner mode by about 36%. We could attempt to use the HTrace APIs in a fancier way to detect when tracing is not enabled, but I think we may be better off just removing it entirely. HTrace as a project is being retired. If we choose to re-introduce tracing (e.g. using OpenTracing) we should make sure to benchmark it more thoroughly. [~stack] what do you think? > HTrace hooks taking 10-15% CPU in DFS client when disabled > -- > > Key: HDFS-13702 > URL: https://issues.apache.org/jira/browse/HDFS-13702 > Project: Hadoop HDFS > Issue Type: Bug > Components: performance >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Priority: Major > > > I am seeing DFSClient.newReaderTraceScope take ~15% CPU in a teravalidate > workload even when HTrace is disabled. This is because it stringifies several > integers. We should avoid all allocation and stringification when htrace is > disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
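The guard described in this thread can be sketched roughly as follows; this is a minimal illustration under the assumption that the tracing library exposes a cheap "is tracing enabled" check. `TraceGuardSketch`, `isTracing` and `describeRead` are illustrative names, not the real DFSClient or HTrace API.

```java
// Illustrative sketch: avoid all allocation and stringification on the
// hot read path unless tracing is actually enabled.
public class TraceGuardSketch {

  // Stands in for tracer state; assume it is toggled by configuration.
  static volatile boolean tracingEnabled = false;

  static boolean isTracing() {
    return tracingEnabled; // cheap: one volatile read, no allocation
  }

  // Builds the span description only when tracing is on, so the
  // disabled case does no boxing or string concatenation at all.
  static String describeRead(long blockId, long offset, int len) {
    if (!isTracing()) {
      return null; // disabled: skip all allocation
    }
    return "read blockId=" + blockId + " offset=" + offset + " len=" + len;
  }

  public static void main(String[] args) {
    System.out.println(describeRead(1L, 0L, 4096)); // null while disabled
    tracingEnabled = true;
    System.out.println(describeRead(1L, 0L, 4096)); // full description
  }
}
```

The point of the sketch is that the enabled-check must be cheaper than the work it guards; a single boolean read is, while building the description string is not.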
[jira] [Commented] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524126#comment-16524126 ] genericqa commented on HDFS-13665: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 49s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 16s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 19s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13665 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929214/HDFS-13665-HDFS-12943.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 037277de9352 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12943 / 8310973 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24499/testReport/ | | Max. process+thread count | 1348 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24499/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Created] (HDDS-195) Create generic CommandWatcher utility
Elek, Marton created HDDS-195: - Summary: Create generic CommandWatcher utility Key: HDDS-195 URL: https://issues.apache.org/jira/browse/HDDS-195 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Elek, Marton Fix For: 0.2.1 In some cases we need a class which can track the status of outgoing commands. The commands should be resent after a while unless a status message is received confirming command completion. At a high level, we need a builder factory which takes the following parameters: * the (destination) event type and the payload of the command which should be repeated * the ID of the command/event * the event/topic of the completion messages. If an EventWithIdentifier is received on this topic, the specific command is done and does not need to be resent. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
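The watcher described in this issue could be sketched roughly as follows: commands are tracked by ID and become due for resend after a timeout unless a completion event with the same ID arrives first. All names here are assumptions for illustration, not the eventual HDDS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a generic command watcher.
public class CommandWatcherSketch<P> {

  private static class Pending<P> {
    final P payload;
    final long sentMillis;
    Pending(P payload, long sentMillis) {
      this.payload = payload;
      this.sentMillis = sentMillis;
    }
  }

  private final Map<Long, Pending<P>> pending = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  public CommandWatcherSketch(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  // Record an outgoing command so it can be resent later if needed.
  public void commandSent(long id, P payload, long nowMillis) {
    pending.put(id, new Pending<>(payload, nowMillis));
  }

  // Completion event received on the completion topic: stop tracking.
  public void completed(long id) {
    pending.remove(id);
  }

  // Payloads still pending past the timeout; the caller resends these.
  public List<P> dueForResend(long nowMillis) {
    List<P> due = new ArrayList<>();
    for (Pending<P> p : pending.values()) {
      if (nowMillis - p.sentMillis >= timeoutMillis) {
        due.add(p.payload);
      }
    }
    return due;
  }
}
```

A builder-factory wrapper, as the issue proposes, would then just bind the destination event type, the completion topic, and the timeout around this core bookkeeping.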
[jira] [Updated] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-194: -- Status: Patch Available (was: Open) > Remove NodePoolManager and node pool handling from SCM > -- > > Key: HDDS-194 > URL: https://issues.apache.org/jira/browse/HDDS-194 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-194.001.patch > > > The current code uses NodePoolManager and ContainerSupervisor to group the > nodes into smaller groups (pools) and handle the pull-based node reports group > by group. > But this code is no longer used, as we switched back to a push-based > model. On the datanode side the reports can be handled by the specific report > handlers, and on the SCM side the reports will be processed by the > SCMHeartbeatDispatcher, which will send the events to the EventQueue. > The NodePool abstraction can therefore be removed from the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
[ https://issues.apache.org/jira/browse/HDDS-194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-194: -- Attachment: HDDS-194.001.patch > Remove NodePoolManager and node pool handling from SCM > -- > > Key: HDDS-194 > URL: https://issues.apache.org/jira/browse/HDDS-194 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-194.001.patch > > > The current code uses NodePoolManager and ContainerSupervisor to group the > nodes into smaller groups (pools) and handle the pull-based node reports group > by group. > But this code is no longer used, as we switched back to a push-based > model. On the datanode side the reports can be handled by the specific report > handlers, and on the SCM side the reports will be processed by the > SCMHeartbeatDispatcher, which will send the events to the EventQueue. > The NodePool abstraction can therefore be removed from the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13635) Incorrect message when block is not found
[ https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524074#comment-16524074 ] genericqa commented on HDFS-13635: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13635 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929219/HDFS-13635.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dca1f1032774 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 238fe00 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24498/testReport/ | | Max. process+thread count | 335 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24498/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorrect message when block is
[jira] [Commented] (HDDS-186) Create under replicated queue
[ https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524069#comment-16524069 ] genericqa commented on HDDS-186: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-186 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929218/HDDS-186.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux aa875a00c794 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 238fe00 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/362/testReport/ | | Max. process+thread count | 343 (vs. ulimit of 1) | | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/362/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Create under replicated queue > - > > Key: HDDS-186 > URL: https://issues.apache.org/jira/browse/HDDS-186 > Project:
[jira] [Created] (HDFS-13702) HTrace hooks taking 10-15% CPU in DFS client when disabled
Todd Lipcon created HDFS-13702: -- Summary: HTrace hooks taking 10-15% CPU in DFS client when disabled Key: HDFS-13702 URL: https://issues.apache.org/jira/browse/HDFS-13702 Project: Hadoop HDFS Issue Type: Bug Components: performance Affects Versions: 3.0.0 Reporter: Todd Lipcon I am seeing DFSClient.newReaderTraceScope take ~15% CPU in a teravalidate workload even when HTrace is disabled. This is because it stringifies several integers. We should avoid all allocation and stringification when htrace is disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13701) Removal of logging guards regressed performance
Todd Lipcon created HDFS-13701: -- Summary: Removal of logging guards regressed performance Key: HDFS-13701 URL: https://issues.apache.org/jira/browse/HDFS-13701 Project: Hadoop HDFS Issue Type: Bug Components: performance Affects Versions: 3.0.0 Reporter: Todd Lipcon HDFS-8971 removed various logging guards from hot methods in the DFS client. In theory using a format string with {} placeholders is equivalent, but in fact it's not equivalent when one or more of the variable arguments are primitives. To be passed as part of the varargs array, the primitives need to be boxed. I am seeing Integer.valueOf() inside BlockReaderLocal.read taking ~3% of CPU. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
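The boxing cost described in this issue can be demonstrated with a small sketch: with SLF4J-style parameterized logging, primitive arguments are boxed to build the varargs array even when the level is disabled, whereas an explicit `isDebugEnabled()` guard avoids that work. The `Logger` interface and `box()` counter below are illustrative stand-ins, not the real API.

```java
// Illustrative sketch of varargs autoboxing in guarded vs unguarded logging.
public class LoggingGuardSketch {

  interface Logger {
    boolean isDebugEnabled();
    void debug(String fmt, Object... args); // varargs force boxing of primitives
  }

  static int boxedCalls = 0;

  // Makes the normally-invisible boxing observable for this demo.
  static Object box(int v) {
    boxedCalls++;
    return v; // auto-boxes via Integer.valueOf
  }

  // Unguarded style: arguments are evaluated (and boxed) unconditionally
  // before debug() can decide to drop the message.
  static void readUnguarded(Logger log, int offset, int len) {
    log.debug("read offset={} len={}", box(offset), box(len));
  }

  // Guarded style: the level check keeps the disabled path allocation-free.
  static void readGuarded(Logger log, int offset, int len) {
    if (log.isDebugEnabled()) {
      log.debug("read offset={} len={}", box(offset), box(len));
    }
  }

  public static void main(String[] args) {
    Logger disabled = new Logger() {
      public boolean isDebugEnabled() { return false; }
      public void debug(String fmt, Object... args) { }
    };
    readUnguarded(disabled, 0, 4096); // boxes both ints anyway
    readGuarded(disabled, 0, 4096);   // boxes nothing
    System.out.println("boxed args: " + boxedCalls); // prints "boxed args: 2"
  }
}
```

On a hot path like BlockReaderLocal.read, restoring the guard (or hoisting the log call out of the loop) recovers the old zero-cost disabled path.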
[jira] [Comment Edited] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523946#comment-16523946 ] Ajay Kumar edited comment on HDDS-175 at 6/26/18 5:15 PM: -- [~nandakumar131] thanks for review, Patch v7 removes dnList from ContainerInfo and SCMContainerInfo. So some of the refactoring you suggested is not applicable. PipelineId in ContainerInfo is renamed to pipelineName to make it consistent with Pipeline field. Also removed ClientPbHelper. Will handle checkstyle and OzonePBHelper suggestion in next iteration. was (Author: ajayydv): [~nandakumar131] thanks for review, Patch v7 removes dnList from ContainerInfo and SCMContainerInfo. So some of the refactoring you suggested is not applicable. PipelineId in ContainerInfo is renamed to pipelineName to make it consistent with Pipeline field. Also removed ClientPbHelper. Will remove OzonePBHelper in next iteration. > Refactor ContainerInfo to remove Pipeline object from it > - > > Key: HDDS-175 > URL: https://issues.apache.org/jira/browse/HDDS-175 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, > HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, > HDDS-175.07.patch > > > Refactor ContainerInfo to remove Pipeline object from it. We can add below 4 > fields to ContainerInfo to recreate pipeline if required: > # pipelineId > # replication type > # expected replication count > # DataNode where its replica exist -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
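The four fields listed in the issue description sketch out roughly this shape for a Pipeline-free ContainerInfo. This is an illustration of the idea only; field names and types are assumed from the description, not taken from the committed HDDS-175 patch:

```java
import java.util.List;

// Sketch: ContainerInfo carries just enough state to recreate a Pipeline on
// demand, instead of embedding the Pipeline object itself.
class ContainerInfoSketch {
    final String pipelineId;            // renamed pipelineName in patch v7
    final String replicationType;       // e.g. RATIS or STAND_ALONE (assumed)
    final int expectedReplicaCount;
    final List<String> replicaDatanodes; // DataNodes where replicas exist

    ContainerInfoSketch(String pipelineId, String replicationType,
                        int expectedReplicaCount, List<String> replicaDatanodes) {
        this.pipelineId = pipelineId;
        this.replicationType = replicationType;
        this.expectedReplicaCount = expectedReplicaCount;
        this.replicaDatanodes = replicaDatanodes;
    }

    // Enough replica locations known to rebuild the pipeline?
    boolean canRecreatePipeline() {
        return replicaDatanodes.size() >= expectedReplicaCount;
    }
}
```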
[jira] [Commented] (HDFS-13635) Incorrect message when block is not found
[ https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523972#comment-16523972 ] Gabor Bota commented on HDFS-13635: --- Since the methods that can throw ReplicaNotFoundException with the NON_EXISTENT_REPLICA string are used in general contexts, it would be better to fix the constant string itself rather than add a new constant. This solution can be found in my v003 patch. > Incorrect message when block is not found > - > > Key: HDFS-13635 > URL: https://issues.apache.org/jira/browse/HDFS-13635 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Major > Attachments: HDFS-13635.001.patch, HDFS-13635.002.patch, > HDFS-13635.003.patch > > > When client opens a file, it asks DataNode to check the blocks' visible > length. If somehow the block is not on the DN, it throws "Cannot append to a > non-existent replica" message, which is incorrect, because > getReplicaVisibleLength() is called for different use, just not for appending > to a block. It should just state "block is not found" > The following stacktrace comes from a CDH5.13, but it looks like the same > warning exists in Apache Hadoop trunk. 
> {noformat} > 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 2 on 50020, call > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength > from 10.0.0.14:53217 Call#38334117 Retry#0 > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot > append to a non-existent replica > BP-725378529-10.236.236.8-1410027444173:13276792346 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756) > at > org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107) > at > org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
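The approach described in the comment above, changing the text of the shared constant rather than adding a new constant next to it, can be sketched as follows. The class and constant names mirror `ReplicaNotFoundException.NON_EXISTENT_REPLICA`, but the replacement wording is illustrative, not the actual v003 patch:

```java
// Sketch: every throw site builds its message from the shared constant, so
// correcting the constant's text fixes the misleading "Cannot append to a
// non-existent replica" wording everywhere at once.
class ReplicaNotFound {
    // Before: "Cannot append to a non-existent replica "
    // After (illustrative wording): a message that only claims the block
    // was not found, which is all getReplicaVisibleLength() can know.
    static final String NON_EXISTENT_REPLICA = "Replica not found for ";

    static String message(String blockId) {
        return NON_EXISTENT_REPLICA + blockId;
    }
}
```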
[jira] [Updated] (HDFS-13635) Incorrect message when block is not found
[ https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HDFS-13635: -- Attachment: HDFS-13635.003.patch > Incorrect message when block is not found > - > > Key: HDFS-13635 > URL: https://issues.apache.org/jira/browse/HDFS-13635 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Major > Attachments: HDFS-13635.001.patch, HDFS-13635.002.patch, > HDFS-13635.003.patch > > > When client opens a file, it asks DataNode to check the blocks' visible > length. If somehow the block is not on the DN, it throws "Cannot append to a > non-existent replica" message, which is incorrect, because > getReplicaVisibleLength() is called for different use, just not for appending > to a block. It should just state "block is not found" > The following stacktrace comes from a CDH5.13, but it looks like the same > warning exists in Apache Hadoop trunk. > {noformat} > 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 2 on 50020, call > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength > from 10.0.0.14:53217 Call#38334117 Retry#0 > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot > append to a non-existent replica > BP-725378529-10.236.236.8-1410027444173:13276792346 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756) > at > org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107) > at > 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-186) Create under replicated queue
[ https://issues.apache.org/jira/browse/HDDS-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-186: Attachment: HDDS-186.02.patch > Create under replicated queue > - > > Key: HDDS-186 > URL: https://issues.apache.org/jira/browse/HDDS-186 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-186.00.patch, HDDS-186.01.patch, HDDS-186.02.patch > > > Create under replicated queue to replicate under replicated containers in > Ozone. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523966#comment-16523966 ] Arpit Agarwal commented on HDDS-167: v05 - rebase to trunk, fix more acceptance tests. > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch, HDDS-167.05.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDDS-167: --- Attachment: HDDS-167.05.patch > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch, HDDS-167.05.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests
[jira] [Comment Edited] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523946#comment-16523946 ] Ajay Kumar edited comment on HDDS-175 at 6/26/18 4:35 PM: -- [~nandakumar131] thanks for review, Patch v7 removes dnList from ContainerInfo and SCMContainerInfo. So some of the refactoring you suggested is not applicable. PipelineId in ContainerInfo is renamed to pipelineName to make it consistent with Pipeline field. Also removed ClientPbHelper. Will remove OzonePBHelper in next iteration. was (Author: ajayydv): [~nandakumar131] thanks for review, Patch v7 removes dnList from ContainerInfo and SCMContainerInfo. So some of the refactoring you suggested is not applicable. Also removed ClientPbHelper. Will remove OzonePBHelper in next iteration. > Refactor ContainerInfo to remove Pipeline object from it > - > > Key: HDDS-175 > URL: https://issues.apache.org/jira/browse/HDDS-175 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, > HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, > HDDS-175.07.patch > > > Refactor ContainerInfo to remove Pipeline object from it. We can add below 4 > fields to ContainerInfo to recreate pipeline if required: > # pipelineId > # replication type > # expected replication count > # DataNode where its replica exist -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Status: Patch Available (was: Open) > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch, HDDS-193.002.patch > > > HDDS-163 introduced a new dispatcher in the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
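The proposal above (report handlers subscribing to specific heartbeat events) is a publish/subscribe dispatcher. A minimal synchronous sketch of that idea, not the actual HDDS `EventQueue` API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch: heartbeat report parts become typed events; report handlers
// subscribe only to the event types they care about. Delivery is synchronous
// here, whereas the proposal queues events for async handling/monitoring.
class EventQueueSketch {
    private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

    void subscribe(String eventType, Consumer<Object> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void fire(String eventType, Object payload) {
        for (Consumer<Object> h
                : handlers.getOrDefault(eventType, Collections.emptyList())) {
            h.accept(payload); // each subscribed handler sees the event once
        }
    }
}
```

The benefit the JIRA describes is uniformity: once heartbeat dispatch goes through the same event mechanism as everything else, the same queue metrics and monitoring cover it too.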
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Attachment: HDFS-13665-HDFS-12943.001.patch > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch, > HDFS-13665-HDFS-12943.001.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transactions own processing, resulting in a stateId response that was lower > than expected. However this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > At the end it was agreed upon to move the logic of Server.setupResponse into > Server.doResponse directly. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Attachment: HDDS-193.002.patch > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch, HDDS-193.002.patch > > > HDDS-163 introduced a new dispatcher in the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. >
[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523946#comment-16523946 ] Ajay Kumar commented on HDDS-175: - [~nandakumar131] thanks for review, Patch v7 removes dnList from ContainerInfo and SCMContainerInfo. So some of the refactoring you suggested is not applicable. Also removed ClientPbHelper. Will remove OzonePBHelper in next iteration. > Refactor ContainerInfo to remove Pipeline object from it > - > > Key: HDDS-175 > URL: https://issues.apache.org/jira/browse/HDDS-175 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, > HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, > HDDS-175.07.patch > > > Refactor ContainerInfo to remove Pipeline object from it. We can add below 4 > fields to ContainerInfo to recreate pipeline if required: > # pipelineId > # replication type > # expected replication count > # DataNode where its replica exist -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-175: Attachment: HDDS-175.07.patch > Refactor ContainerInfo to remove Pipeline object from it > - > > Key: HDDS-175 > URL: https://issues.apache.org/jira/browse/HDDS-175 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, > HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, > HDDS-175.07.patch > > > Refactor ContainerInfo to remove Pipeline object from it. We can add below 4 > fields to ContainerInfo to recreate pipeline if required: > # pipelineId > # replication type > # expected replication count > # DataNode where its replica exist
[jira] [Commented] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade
[ https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523900#comment-16523900 ] genericqa commented on HDFS-10664: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS | | | hadoop.hdfs.server.namenode.TestNameNodeMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-10664 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12820927/HDFS-10664.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 68c8cdc74d9a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 238fe00 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24497/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24497/testReport/ | | Max. process+thread count | 2716 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24497/console | | Powered
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Status: Open (was: Patch Available) > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch > > > HDDS-163 introduced a new dispatcher in the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13700) The process of loading image can be done in a pipeline model
[ https://issues.apache.org/jira/browse/HDFS-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523841#comment-16523841 ] genericqa commented on HDFS-13700: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 35s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 11s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 42s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}235m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | org.apache.hadoop.util.PipelineTask$Carriage$CarriageThread.run() does not release lock on all exception paths At PipelineTask.java:on all exception paths At PipelineTask.java:[line 148] | | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Dead store to counter in
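The findbugs warning above ("does not release lock on all exception paths" in `PipelineTask$Carriage$CarriageThread.run()`) is the classic pattern that the `lock()`/`try`/`finally` idiom prevents. A generic sketch of the safe shape, not the `PipelineTask` code itself:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: acquire the lock OUTSIDE the try, do the guarded work inside, and
// release in finally so every exception path still unlocks.
class LockRelease {
    private final ReentrantLock lock = new ReentrantLock();
    int counter = 0;

    void increment() {
        lock.lock();
        try {
            counter++; // in real code this section may throw
        } finally {
            lock.unlock(); // released even if the body throws
        }
    }
}
```

If `unlock()` is instead called at the end of the method body (or only on some branches), any exception thrown in between leaves the lock held forever, which is exactly what the checker flags.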
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523824#comment-16523824 ] genericqa commented on HDFS-13697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 28s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}220m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13697 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929171/HDFS-13697.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux d2693d4b10b4 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade
[ https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523784#comment-16523784 ] genericqa commented on HDFS-10664: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-10664 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12820927/HDFS-10664.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c3d05dbba50d 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 238fe00 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24496/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24496/testReport/ | | Max. process+thread count | 3540 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24496/console | | Powered by | Apache
[jira] [Commented] (HDFS-13610) [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove unnecessary dummy sync
[ https://issues.apache.org/jira/browse/HDFS-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523765#comment-16523765 ] genericqa commented on HDFS-13610: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 48s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.qjournal.client.TestQuorumJournalManager | | | hadoop.hdfs.TestDFSClientRetries | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13610 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929180/HDFS-13610-HDFS-12943.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 80c4bb60a6a0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12943 / 8310973 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/24494/artifact/out/whitespace-eol.txt | | unit |
[jira] [Commented] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523632#comment-16523632 ] Kitti Nanasi commented on HDFS-13690: - Thank you [~xiaochen] for the comments! * This Jira only handles the crypto admin; the key shell command is just an example in the description of how descriptive the output should be. * Good idea, I removed the stack trace in patch v002 and wrote a debug message instead. * I didn't handle SocketTimeoutException and ConnectException together, because SocketTimeoutException already carries a meaningful message and it is fine to rethrow it as-is, but ConnectException does not (only "Connection refused."), so I rethrow it wrapped in an exception with a meaningful message, since that message is printed when the createZone command is executed. I also think printing the whole URL provides more information than just the IP address and port. New output: {code:java} root@ad1edbfc9866:/hadoop# hdfs crypto -createZone -keyName mykey -path /zone RemoteException: Failed to connect to: http://localhost:9600/kms/v1/key/mykey/_metadata {code} > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch, > HDFS-13690.003.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvider. 
> For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code}
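The wrapping approach the comment above describes (rethrowing a bare ConnectException with the full KMS URL in the message) can be sketched roughly as follows. This is a hedged illustration, not the actual KMSClientProvider code; the class and helper names are hypothetical:

```java
import java.io.IOException;
import java.net.ConnectException;

public class KmsConnectExample {

    // Hypothetical helper: a raw ConnectException says only
    // "Connection refused", so wrap it in an IOException that names the
    // full KMS URL, giving the createZone command something actionable
    // to print.
    static IOException wrapConnectException(ConnectException cause, String url) {
        return new IOException("Failed to connect to: " + url, cause);
    }

    public static void main(String[] args) {
        ConnectException refused = new ConnectException("Connection refused");
        String kmsUrl = "http://localhost:9600/kms/v1/key/mykey/_metadata";
        IOException wrapped = wrapConnectException(refused, kmsUrl);
        // Matches the improved "New output" shown in the comment above.
        System.out.println(wrapped.getMessage());
    }
}
```

Keeping the original ConnectException as the cause preserves the stack trace for debug logging while the user-facing message stays short.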
[jira] [Updated] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kitti Nanasi updated HDFS-13690: Attachment: HDFS-13690.003.patch > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch, > HDFS-13690.003.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvider. > For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade
[ https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523615#comment-16523615 ] Junping Du commented on HDFS-10664: --- This patch seems to correct a harmless but silly mistake in the JN version number, and I see very low risk in the patch here. Kicking off Jenkins again. > layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION > file after cluster upgrade > --- > > Key: HDFS-10664 > URL: https://issues.apache.org/jira/browse/HDFS-10664 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha, hdfs >Affects Versions: 2.7.1 >Reporter: Amit Anand >Assignee: Yuanbo Liu >Priority: Major > Attachments: HDFS-10664.001.patch > > > After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN > VERSION file and JN VERSION file. > Here is what I see: > Before cluster upgrade: > == > {code} > ## Version file from NN current directory > namespaceID=109645726 > clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de > cTime=0 > storageType=NAME_NODE > blockpoolID=BP-786201894-10.0.100.11-1466026941507 > layoutVersion=-60 > {code} > {code} > ## Version file from JN current directory > namespaceID=109645726 > clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de > cTime=0 > storageType=JOURNAL_NODE > layoutVersion=-60 > {code} > After cluster upgrade: > = > {code} > ## Version file from NN current directory > namespaceID=109645726 > clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de > cTime=0 > storageType=NAME_NODE > blockpoolID=BP-786201894-10.0.100.11-1466026941507 > layoutVersion=-63 > {code} > {code} > ## Version file from JN current directory > namespaceID=109645726 > clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de > cTime=0 > storageType=JOURNAL_NODE > layoutVersion=-60 > {code} > Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during > {{initializeSharedEdits}}, it should also update the file with correct > information after the cluster is upgraded and 
{{hdfs dfsadmin > -finalizeUpgrade}} has been executed.
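The mismatch quoted in the issue above can be detected mechanically: the VERSION files use java.util.Properties key=value syntax, so comparing the {{layoutVersion}} fields of the NN and JN copies is straightforward. A minimal sketch, assuming the file contents have already been read into strings; the class and method names here are hypothetical, not Hadoop code:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class LayoutVersionCheck {

    // Hypothetical helper: VERSION files are in java.util.Properties
    // format, so load both and compare their layoutVersion fields.
    static boolean layoutVersionsMatch(String nnVersion, String jnVersion) {
        try {
            Properties nn = new Properties();
            nn.load(new StringReader(nnVersion));
            Properties jn = new Properties();
            jn.load(new StringReader(jnVersion));
            return nn.getProperty("layoutVersion", "")
                     .equals(jn.getProperty("layoutVersion", ""));
        } catch (IOException e) {
            // StringReader cannot actually fail; rethrow unchecked.
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Values taken from the "after cluster upgrade" dump in the issue.
        String nn = "storageType=NAME_NODE\nlayoutVersion=-63\n";
        String jn = "storageType=JOURNAL_NODE\nlayoutVersion=-60\n";
        System.out.println(layoutVersionsMatch(nn, jn)); // prints false
    }
}
```

The proposed fix direction in the issue is for the NameNode to rewrite the JournalNode VERSION file after {{finalizeUpgrade}}, at which point a check like this would return true again.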