[jira] [Commented] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
[ https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350436#comment-15350436 ] Hadoop QA commented on HDFS-10580: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813584/HDFS-10580.001.patch | | JIRA Issue | HDFS-10580 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2fb47579acdf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 73615a7 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15918/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15918/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15918/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info > -- > > Key: HDFS-10580 > URL: https://issues.apache.org/jira/browse/HDFS-10580
[jira] [Updated] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10569: --- Status: Patch Available (was: In Progress) > A bug causes OutOfIndex error in BlockListAsLongs > - > > Key: HDFS-10569 > URL: https://issues.apache.org/jira/browse/HDFS-10569 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch > > > An obvious bug in LongsDecoder.getBlockListAsLongs(): the size of the var *longs* is the size of *values* plus 2, but the for-loop accesses *values* using the *longs* index. This causes an out-of-index error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
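The indexing bug described above can be shown in miniature. This is a hedged sketch with hypothetical names, not the actual BlockListAsLongs code: the decoder builds a `longs` list two entries larger than `values` (two header slots), so a copy loop bounded by the `longs` size would overrun `values` by two; the fix bounds the loop by `values.size()`.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the off-by-two (hypothetical names; not the real
// LongsDecoder): `longs` holds 2 header slots plus one entry per value.
public class LongsDecoderSketch {
    public static List<Long> getBlockListAsLongs(List<Long> values) {
        List<Long> longs = new ArrayList<>(values.size() + 2);
        longs.add((long) values.size()); // header slot: finalized block count
        longs.add(0L);                   // header slot: under-construction count
        // Fix: bound the loop by values.size(); using longs' eventual size
        // (values.size() + 2) would read two entries past the end of values.
        for (int i = 0; i < values.size(); i++) {
            longs.add(values.get(i));
        }
        return longs;
    }
}
```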
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Status: Patch Available (was: In Progress) > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, > datanode_html.001.jpg, datanode_loading_err.002.jpg, > datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, > dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg > > > At present, the datanode web UI doesn't have much information except for node name and port. We propose to add more information, similar to the namenode UI, including: > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Status: In Progress (was: Patch Available) > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, > datanode_html.001.jpg, datanode_loading_err.002.jpg, > datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, > dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg > > > At present, the datanode web UI doesn't have much information except for node name and port. We propose to add more information, similar to the namenode UI, including: > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10569: --- Status: In Progress (was: Patch Available) > A bug causes OutOfIndex error in BlockListAsLongs > - > > Key: HDFS-10569 > URL: https://issues.apache.org/jira/browse/HDFS-10569 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch > > > An obvious bug in LongsDecoder.getBlockListAsLongs(): the size of the var *longs* is the size of *values* plus 2, but the for-loop accesses *values* using the *longs* index. This causes an out-of-index error. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
[ https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10580: - Attachment: HDFS-10580.001.patch > DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info > -- > > Key: HDFS-10580 > URL: https://issues.apache.org/jira/browse/HDFS-10580 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-10580.001.patch > > > There are two unused methods, {{skipVolume}} and {{printQueue}}, in class {{GreedyPlanner}}. These two methods were added in HDFS-9469 but are not used. They print detailed debug info, so we can make use of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
[ https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10580: - Status: Patch Available (was: Open) Attaching an initial patch; thanks for reviewing. > DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info > -- > > Key: HDFS-10580 > URL: https://issues.apache.org/jira/browse/HDFS-10580 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > > There are two unused methods, {{skipVolume}} and {{printQueue}}, in class {{GreedyPlanner}}. These two methods were added in HDFS-9469 but are not used. They print detailed debug info, so we can make use of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
[ https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10580: - Issue Type: Sub-task (was: Bug) Parent: HDFS-10576 > DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info > -- > > Key: HDFS-10580 > URL: https://issues.apache.org/jira/browse/HDFS-10580 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > > There are two unused methods, {{skipVolume}} and {{printQueue}}, in class {{GreedyPlanner}}. These two methods were added in HDFS-9469 but are not used. They print detailed debug info, so we can make use of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
Yiqun Lin created HDFS-10580: Summary: DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info Key: HDFS-10580 URL: https://issues.apache.org/jira/browse/HDFS-10580 Project: Hadoop HDFS Issue Type: Bug Components: balancer & mover Reporter: Yiqun Lin Assignee: Yiqun Lin Priority: Minor There are two unused methods, {{skipVolume}} and {{printQueue}}, in class {{GreedyPlanner}}. These two methods were added in HDFS-9469 but are not used. They print detailed debug info, so we can make use of them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
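The intent above is simply to call the existing helpers from the planning loop when verbose output is enabled. A rough sketch of that wiring, with hypothetical shapes (this is not the actual GreedyPlanner code, and `java.util.logging` stands in for Hadoop's logger):

```java
import java.util.Queue;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: hook the unused debug helpers into a greedy planning loop.
public class GreedyPlannerSketch {
    private static final Logger LOG = Logger.getLogger("GreedyPlannerSketch");

    static class Volume {
        final String path; final long capacity; final long used;
        Volume(String path, long capacity, long used) {
            this.path = path; this.capacity = capacity; this.used = used;
        }
    }

    // Debug helper: explain why a volume was skipped.
    static void skipVolume(Volume v) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Skipping volume " + v.path + " used=" + v.used + "/" + v.capacity);
        }
    }

    // Debug helper: dump the pending volume queue.
    static void printQueue(Queue<Volume> q) {
        if (LOG.isLoggable(Level.FINE)) {
            q.forEach(v -> LOG.fine("queued: " + v.path));
        }
    }

    // Returns how many volumes were skipped as full.
    static int plan(Queue<Volume> queue) {
        printQueue(queue); // debug: queue state before planning
        int skipped = 0;
        while (!queue.isEmpty()) {
            Volume v = queue.poll();
            if (v.used >= v.capacity) {
                skipVolume(v); // debug: record why this volume is passed over
                skipped++;
            }
        }
        return skipped;
    }
}
```

Guarding the calls with a log-level check keeps the string building off the hot path when debug output is disabled.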
[jira] [Commented] (HDFS-7343) A comprehensive and flexible storage policy engine
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350392#comment-15350392 ] Yuanbo Liu commented on HDFS-7343: -- [~drankye] Great proposal. Any updates here? We're interested in this feature and look forward to future work on this proposal. > A comprehensive and flexible storage policy engine > -- > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Kai Zheng >Assignee: Kai Zheng > > As discussed in HDFS-7285, it would be better to have a comprehensive and flexible storage policy engine considering file attributes, metadata, data temperature, storage type, EC codec, available hardware capabilities, user/application preference, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350390#comment-15350390 ] Hadoop QA commented on HDFS-10530: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 184 unchanged - 0 fixed = 186 total (was 184) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength | | | hadoop.hdfs.TestLeaseRecoveryStriped | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813576/HDFS-10530.1.patch | | JIRA Issue | HDFS-10530 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 93a4016d67e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 73615a7 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15916/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15916/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15916/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-10568) Reuse ObjectMapper instance in CombinedHostsFileReader and CombinedHostsFileWriter
[ https://issues.apache.org/jira/browse/HDFS-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350356#comment-15350356 ] Yiqun Lin commented on HDFS-10568: -- It seems Jenkins hung; uploading the patch file again to re-trigger the build. > Reuse ObjectMapper instance in CombinedHostsFileReader and > CombinedHostsFileWriter > -- > > Key: HDFS-10568 > URL: https://issues.apache.org/jira/browse/HDFS-10568 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-10568.001.patch > > > The {{ObjectMapper}} instance is not reused in class {{CombinedHostsFileReader}} and {{CombinedHostsFileWriter}}. We can reuse them to improve performance. > Here are related issues: HDFS-9724, HDFS-9768. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
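The reuse proposed above works because Jackson's `ObjectMapper` is thread-safe once configured, so a single shared instance avoids rebuilding its internal (de)serializer caches on every call. A hedged sketch of the pattern, with a hypothetical `HostEntry` type and class name (not the actual CombinedHostsFileReader code; assumes jackson-databind on the classpath):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;

// Sketch: hoist the mapper to a static field instead of `new ObjectMapper()`
// per read. HostEntry is a hypothetical stand-in for the real host JSON shape.
public class CombinedHostsFileReaderSketch {
    // One shared, thread-safe mapper for the whole class.
    private static final ObjectMapper MAPPER = new ObjectMapper();
    // An ObjectReader bound to a fixed target type is cheaper still.
    private static final ObjectReader READER = MAPPER.readerFor(HostEntry.class);

    public static class HostEntry {
        public String hostName;
        public int port;
    }

    public static HostEntry readEntry(String json) throws java.io.IOException {
        return READER.readValue(json);
    }
}
```

The same idea applies on the writer side with a shared `ObjectWriter` obtained from `MAPPER.writerFor(...)`.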
[jira] [Updated] (HDFS-10568) Reuse ObjectMapper instance in CombinedHostsFileReader and CombinedHostsFileWriter
[ https://issues.apache.org/jira/browse/HDFS-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10568: - Attachment: HDFS-10568.001.patch > Reuse ObjectMapper instance in CombinedHostsFileReader and > CombinedHostsFileWriter > -- > > Key: HDFS-10568 > URL: https://issues.apache.org/jira/browse/HDFS-10568 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-10568.001.patch > > > The {{ObjectMapper}} instance is not reused in class > {{CombinedHostsFileReader}} and {{CombinedHostsFileWriter}}. We can reuse > them to improve performance. > Here are related issues: HDFS-9724, HDFS-9768. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10568) Reuse ObjectMapper instance in CombinedHostsFileReader and CombinedHostsFileWriter
[ https://issues.apache.org/jira/browse/HDFS-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-10568: - Attachment: (was: HDFS-10568.001.patch) > Reuse ObjectMapper instance in CombinedHostsFileReader and > CombinedHostsFileWriter > -- > > Key: HDFS-10568 > URL: https://issues.apache.org/jira/browse/HDFS-10568 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-10568.001.patch > > > The {{ObjectMapper}} instance is not reused in class > {{CombinedHostsFileReader}} and {{CombinedHostsFileWriter}}. We can reuse > them to improve performance. > Here are related issues: HDFS-9724, HDFS-9768. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-10530 started by GAO Rui. -- > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: GAO Rui >Assignee: GAO Rui > Attachments: HDFS-10530.1.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file is written to all 5 racks (2 DNs each) of the cluster. > 2. Reconstruction work is scheduled when a 6th rack is added. > 3. However, adding a 7th or more racks does not trigger reconstruction work. > Based on the default EC block placement policy defined in “BlockPlacementPolicyRackFaultTolerant.java”, the EC file should be able to be distributed across 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* instead of *getRealDataBlockNum()*. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GAO Rui updated HDFS-10530: --- Status: Patch Available (was: In Progress) > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: GAO Rui >Assignee: GAO Rui > Attachments: HDFS-10530.1.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file is written to all 5 racks (2 DNs each) of the cluster. > 2. Reconstruction work is scheduled when a 6th rack is added. > 3. However, adding a 7th or more racks does not trigger reconstruction work. > Based on the default EC block placement policy defined in “BlockPlacementPolicyRackFaultTolerant.java”, the EC file should be able to be distributed across 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* instead of *getRealDataBlockNum()*. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy
[ https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] GAO Rui updated HDFS-10530: --- Attachment: HDFS-10530.1.patch [~zhz], thanks for your ideas. I've attached a patch with an additional unit test. Please feel free to comment on the patch. For the priority of rack placement policy related reconstruction work, I'd like to be assigned, and will report to you in another JIRA :D > BlockManager reconstruction work scheduling should correctly adhere to EC > block placement policy > > > Key: HDFS-10530 > URL: https://issues.apache.org/jira/browse/HDFS-10530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: GAO Rui >Assignee: GAO Rui > Attachments: HDFS-10530.1.patch > > > This issue was found by [~tfukudom]. > Under the RS-DEFAULT-6-3-64k EC policy: > 1. Create an EC file; the file is written to all 5 racks (2 DNs each) of the cluster. > 2. Reconstruction work is scheduled when a 6th rack is added. > 3. However, adding a 7th or more racks does not trigger reconstruction work. > Based on the default EC block placement policy defined in “BlockPlacementPolicyRackFaultTolerant.java”, the EC file should be able to be distributed across 9 racks if possible. > In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* instead of *getRealDataBlockNum()*. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
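The proposed change above amounts to comparing rack coverage against the full block-group width (data + parity) rather than data blocks only. A toy illustration under the RS-6-3 layout from the report (hypothetical names; not the BlockManager code):

```java
// Sketch: why the required-rack count must use the total block number.
public class EcPlacementSketch {
    static final int DATA_BLOCKS = 6;
    static final int PARITY_BLOCKS = 3;

    static int getRealDataBlockNum()  { return DATA_BLOCKS; }
    static int getRealTotalBlockNum() { return DATA_BLOCKS + PARITY_BLOCKS; }

    // Rack-fault-tolerant placement wants the block group spread over
    // min(requiredBlocks, racksInCluster) distinct racks.
    static boolean isPlacementPolicySatisfied(int racksInCluster,
                                              int racksUsed,
                                              int requiredBlocks) {
        return racksUsed >= Math.min(requiredBlocks, racksInCluster);
    }
}
```

With 7 racks and a file currently on 6: checking against `getRealDataBlockNum()` (6) reports the policy satisfied, so nothing is scheduled toward the 7th rack; checking against `getRealTotalBlockNum()` (9) requires min(9, 7) = 7 racks and would trigger the reconstruction work the reporter expects.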
[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350146#comment-15350146 ] Xinwei Qin commented on HDFS-7859: --- The rebased patch still has some test failures; I'm doing my best to fix them now. > Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Xinwei Qin > Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do > Attachments: HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch, > HDFS-7859.001.patch, HDFS-7859.002.patch, HDFS-7859.004.patch > > > In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we persist EC schemas in the NameNode centrally and reliably, so that EC zones can reference them by name efficiently. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350071#comment-15350071 ] Hudson commented on HDFS-10536: --- SUCCESS: Integrated in Hadoop-trunk-Commit #10020 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10020/]) HDFS-10536. Standby NN can not trigger log roll after EditLogTailer (vinayakumarb: rev 73615a789d96292e2731b5aa33ce46aa004d8211) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > > Key: HDFS-10536 > URL: https://issues.apache.org/jira/browse/HDFS-10536 > Project: Hadoop HDFS > Issue Type: Bug > Components: auto-failover >Affects Versions: 3.0.0-alpha1 >Reporter: XingFeng Shen >Assignee: XingFeng Shen >Priority: Critical > Labels: patch > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10536-02.patch, HDFS-10536.02.patch, > HDFS-10536.patch > > > When all NameNodes become standby, EditLogTailer retries 3 times to trigger a log roll, then fails and throws the exception "Cannot find any valid remote NN to service request!". After one NameNode becomes active, the standby NN still cannot trigger a log roll because the variable "nnLoopCount" is still 3 and is never reset to 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
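The stuck-counter failure mode described above can be sketched in isolation. This is a hedged toy model with hypothetical names (the real logic lives in EditLogTailer.triggerActiveLogRoll): if the retry counter survives a failed invocation, every later call sees it already at the limit and gives up even though an active NN now exists; the fix resets it at the start of each invocation.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the nnLoopCount bug (hypothetical shapes, not the real code).
public class LogRollRetrySketch {
    static final int MAX_RETRIES = 3;
    private int nnLoopCount = 0;

    private boolean tryRoll(List<String> nns, Predicate<String> isActive) {
        while (nnLoopCount < MAX_RETRIES) {
            String nn = nns.get(nnLoopCount % nns.size());
            nnLoopCount++;
            if (isActive.test(nn)) {
                return true; // found an active NN; log roll triggered
            }
        }
        return false; // nnLoopCount is now stuck at MAX_RETRIES
    }

    // Buggy shape: the counter carries over between invocations.
    public boolean triggerBuggy(List<String> nns, Predicate<String> isActive) {
        return tryRoll(nns, isActive);
    }

    // Fixed shape: reset the counter on every invocation.
    public boolean triggerFixed(List<String> nns, Predicate<String> isActive) {
        nnLoopCount = 0;
        return tryRoll(nns, isActive);
    }
}
```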
[jira] [Updated] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-10536: - Resolution: Fixed Assignee: XingFeng Shen Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk. Thanks for the contribution, [~xingfengshen]. > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > > Key: HDFS-10536 > URL: https://issues.apache.org/jira/browse/HDFS-10536 > Project: Hadoop HDFS > Issue Type: Bug > Components: auto-failover >Affects Versions: 3.0.0-alpha1 >Reporter: XingFeng Shen >Assignee: XingFeng Shen >Priority: Critical > Labels: patch > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10536-02.patch, HDFS-10536.02.patch, > HDFS-10536.patch > > > When all NameNodes become standby, EditLogTailer retries 3 times to trigger a log roll, then fails and throws the exception "Cannot find any valid remote NN to service request!". After one NameNode becomes active, the standby NN still cannot trigger a log roll because the variable "nnLoopCount" is still 3 and is never reset to 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org