[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218148#comment-16218148 ] Xiao Chen commented on HDFS-12482: -- Thanks for the new rev Eddy! We can use Preconditions, which will conveniently throw an {{IllegalArgumentException}} for invalid arguments, so we don't have to change the constructor signature. {code} Preconditions.checkArgument(this.xmitWeight >= 0, "Invalid value configured for " + DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_KEY + ", it cannot be a negative value (" + this.xmitWeight + ")."); {code} +1 once this is done and the checkstyle unused-import warnings are fixed. > Provide a configuration to adjust the weight of EC recovery tasks to adjust > the speed of recovery > - > > Key: HDFS-12482 > URL: https://issues.apache.org/jira/browse/HDFS-12482 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12482.00.patch, HDFS-12482.01.patch, > HDFS-12482.02.patch, HDFS-12482.03.patch > > > The relative speed of EC recovery compared to 3x replica recovery is a > function of the EC codec, number of sources, NIC speed, CPU speed, etc. > Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of > sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the NN > uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the > DataNode. Thus we can add a coefficient for users to tune the weight of EC > recovery tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
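For reference, the suggested check can be sketched as a standalone snippet. This is a hedged sketch only: Guava's {{Preconditions.checkArgument}} is emulated with a plain throw so the example has no external dependency, and the key string below is an assumption (the real value is defined by {{DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_KEY}}).

```java
// Hedged sketch: emulates Guava's Preconditions.checkArgument with a plain
// throw. The key string is assumed for illustration; the real constant is
// DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_XMITS_WEIGHT_KEY.
public class XmitWeightCheck {
    static final String XMITS_WEIGHT_KEY =
        "dfs.datanode.ec.reconstruction.xmits.weight"; // assumed key name

    static float validate(float xmitWeight) {
        // Same condition as the suggested Preconditions.checkArgument call.
        if (!(xmitWeight >= 0)) {
            throw new IllegalArgumentException(
                "Invalid value configured for " + XMITS_WEIGHT_KEY
                + ", it cannot be a negative value (" + xmitWeight + ").");
        }
        return xmitWeight;
    }

    public static void main(String[] args) {
        System.out.println(validate(0.5f));
        try {
            validate(-1.0f);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Because the check throws from inside the existing constructor body, the constructor signature stays unchanged, which is the point of the suggestion.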
[jira] [Comment Edited] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218129#comment-16218129 ] Virajith Jalaparti edited comment on HDFS-11902 at 10/25/17 5:42 AM: - Posting a new patch which moves the {{BlockAliasMap}} interface to its own package ({{org.apache.hadoop.hdfs.server.common.blockaliasmap}}) and its implementations to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap.impl}}. Also adds a package-info.java to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap}}. [~ehiggs], can you please take a look at patch v010? This will affect HDFS-12665. was (Author: virajith): Posting a new patch which moves the {{BlockAliasMap}} interface to its own package ({{org.apache.hadoop.hdfs.server.common.blockaliasmap}}) and its implementations to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap.impl}}. Also adds a package-info.java to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap}}. > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one.
[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218136#comment-16218136 ] Hadoop QA commented on HDFS-12482: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 11s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 421 unchanged - 0 fixed = 426 total (was 421) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}156m 19s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}224m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData | | | org.apache.hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12482 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893851/HDFS-12482.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Patch Available (was: Open) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one.
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Attachment: HDFS-11902-HDFS-9806.010.patch Posting a new patch which moves the {{BlockAliasMap}} interface to its own package ({{org.apache.hadoop.hdfs.server.common.blockaliasmap}}) and its implementations to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap.impl}}. Also adds a package-info.java to the package {{org.apache.hadoop.hdfs.server.common.blockaliasmap}}. > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one.
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Open (was: Patch Available) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one.
[jira] [Updated] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-12532: Status: Patch Available (was: In Progress) > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-12532.patch > > > Configure principal without hostname (i.e. hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here the DN will fail to register as it can pick an IP which is not in "/etc/hosts".
[jira] [Updated] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-12532: Attachment: HDFS-12532.patch Attaching a patch with the above approach. Kindly review. > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-12532.patch > > > Configure principal without hostname (i.e. hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here the DN will fail to register as it can pick an IP which is not in "/etc/hosts".
[jira] [Work started] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12532 started by Brahma Reddy Battula. --- > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > > Configure principal without hostname (i.e. hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here the DN will fail to register as it can pick an IP which is not in "/etc/hosts".
[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218115#comment-16218115 ] Hadoop QA commented on HDFS-12482: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 422 unchanged - 0 fixed = 427 total (was 422) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 29s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}150m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.federation.metrics.TestFederationMetrics | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12482 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893851/HDFS-12482.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux eda0a9790513 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 17cd8d0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Reopened] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula reopened HDFS-12532: - Assignee: Brahma Reddy Battula > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > > Configure principal without hostname (i.e. hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here the DN will fail to register as it can pick an IP which is not in "/etc/hosts".
[jira] [Commented] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218114#comment-16218114 ] Brahma Reddy Battula commented on HDFS-12532: - bq. I recommend either switching your interface and aliased ips It looks like that's not possible in our environment; the interface is used by other components as well. bq. set dfs.namenode.datanode.registration.ip-hostname-check=false. This can avoid the ERROR, but it will override the [floating IP|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1025], so clients will get this IP from the Namenode while the Datanode listens on another IP. I feel that when the principal has a hostname which can't be resolved (i.e. bindAddr == null), we could add a config like the one below. This can be configured on dual-IP machines. {code} if (bindAddr == null) { String bindAddrIp = conf.get( CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_LOCAL_BIND_IP_KEY); if (bindAddrIp != null && !bindAddrIp.isEmpty()) { bindAddr = new InetSocketAddress(bindAddrIp, 0); } } {code} > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula > > Configure principal without hostname (i.e. hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here the DN will fail to register as it can pick an IP which is not in "/etc/hosts".
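The proposed fallback can be sketched as a standalone harness. This is illustrative only: the key name mirrors the {{IPC_CLIENT_CONNECT_LOCAL_BIND_IP_KEY}} constant proposed in the comment (not an existing Hadoop key), and Hadoop's Configuration object is stood in for by a plain Map.

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the fallback proposed above: when the principal's
// host part does not resolve (bindAddr == null), fall back to an explicitly
// configured local bind IP. The key is hypothetical, from the proposal.
public class LocalBindFallback {
    static final String LOCAL_BIND_IP_KEY =
        "ipc.client.connect.local.bind.ip"; // proposed, not an existing key

    static InetSocketAddress resolveBindAddr(InetSocketAddress bindAddr,
                                             Map<String, String> conf) {
        if (bindAddr == null) {
            String bindAddrIp = conf.get(LOCAL_BIND_IP_KEY);
            if (bindAddrIp != null && !bindAddrIp.isEmpty()) {
                // Port 0: let the OS pick an ephemeral port, as in the proposal.
                bindAddr = new InetSocketAddress(bindAddrIp, 0);
            }
        }
        return bindAddr; // may still be null if no fallback is configured
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(LOCAL_BIND_IP_KEY, "192.168.1.10");
        System.out.println(resolveBindAddr(null, conf));
    }
}
```

An already-resolved bindAddr is returned untouched, so the fallback only kicks in for the unresolvable-principal case described above.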
[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12681: - Attachment: HDFS-12681.04.patch > Fold HdfsLocatedFileStatus into HdfsFileStatus > -- > > Key: HDFS-12681 > URL: https://issues.apache.org/jira/browse/HDFS-12681 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chris Douglas >Priority: Minor > Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, > HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch > > > {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of > {{LocatedFileStatus}}. Conversion requires copying common fields and shedding > unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to > extend {{LocatedFileStatus}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
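The hierarchy problem in the description can be seen with heavily simplified, hypothetical stand-ins for the real classes (field names and shapes are illustrative only): because {{HdfsLocatedFileStatus}} extends {{HdfsFileStatus}} rather than {{LocatedFileStatus}}, converting it requires copying the common fields and shedding the HDFS-specific data.

```java
// Hypothetical, heavily simplified stand-ins for the real classes; the
// actual Hadoop types carry many more fields. Field names are illustrative.
class FileStatus {
    final String path;
    FileStatus(String path) { this.path = path; }
}

class LocatedFileStatus extends FileStatus {
    final long[] blockOffsets; // stand-in for block locations
    LocatedFileStatus(String path, long[] blockOffsets) {
        super(path);
        this.blockOffsets = blockOffsets;
    }
}

class HdfsFileStatus extends FileStatus {
    final byte storagePolicy; // stand-in for HDFS-specific data
    HdfsFileStatus(String path, byte storagePolicy) {
        super(path);
        this.storagePolicy = storagePolicy;
    }
}

// Subtype of HdfsFileStatus but NOT of LocatedFileStatus, so conversion
// must copy common fields and shed the HDFS-specific ones, as the
// description puts it.
class HdfsLocatedFileStatus extends HdfsFileStatus {
    final long[] blockOffsets;
    HdfsLocatedFileStatus(String path, byte storagePolicy, long[] blockOffsets) {
        super(path, storagePolicy);
        this.blockOffsets = blockOffsets;
    }

    LocatedFileStatus toLocated() {
        // storagePolicy is dropped here; only common fields survive.
        return new LocatedFileStatus(path, blockOffsets);
    }
}
```

If {{HdfsFileStatus}} instead extended {{LocatedFileStatus}}, the copy-and-shed conversion above would become unnecessary, which is the cleanup the JIRA proposes.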
[jira] [Updated] (HDFS-12708) Fix hdfs haadmin usage
[ https://issues.apache.org/jira/browse/HDFS-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HDFS-12708: --- Affects Version/s: 3.0.0-alpha4 Status: Patch Available (was: Open) > Fix hdfs haadmin usage > --- > > Key: HDFS-12708 > URL: https://issues.apache.org/jira/browse/HDFS-12708 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: fang zhenyi >Assignee: fang zhenyi >Priority: Minor > Fix For: 3.1.0 > > Attachments: HDFS-12708.001.patch > >
[jira] [Updated] (HDFS-12708) Fix hdfs haadmin usage
[ https://issues.apache.org/jira/browse/HDFS-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HDFS-12708: --- Attachment: HDFS-12708.001.patch > Fix hdfs haadmin usage > --- > > Key: HDFS-12708 > URL: https://issues.apache.org/jira/browse/HDFS-12708 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: fang zhenyi >Assignee: fang zhenyi >Priority: Minor > Fix For: 3.1.0 > > Attachments: HDFS-12708.001.patch > >
[jira] [Created] (HDFS-12708) Fix hdfs haadmin usage
fang zhenyi created HDFS-12708: -- Summary: Fix hdfs haadmin usage Key: HDFS-12708 URL: https://issues.apache.org/jira/browse/HDFS-12708 Project: Hadoop HDFS Issue Type: Improvement Reporter: fang zhenyi Assignee: fang zhenyi Priority: Minor Fix For: 3.1.0
[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218045#comment-16218045 ] Hadoop QA commented on HDFS-12544: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 468 unchanged - 5 fixed = 470 total (was 473) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 20s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12544 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893845/HDFS-12544.05.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 0471709a71d8 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 17cd8d0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21807/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218037#comment-16218037 ] Hadoop QA commented on HDFS-11902: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 16s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 39s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 8 new + 448 unchanged - 11 fixed = 456 total (was 459) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 18s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 46s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}167m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-11902 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893840/HDFS-11902-HDFS-9806.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 1e1d3f2771de 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | |
[jira] [Commented] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218021#comment-16218021 ] Yiqun Lin commented on HDFS-11468: -- Thanks [~xyao] for the comments, I will take care of the remaining work once HDFS-12474 is checked in. > Ozone: SCM: Add Node Metrics for SCM > > > Key: HDFS-11468 > URL: https://issues.apache.org/jira/browse/HDFS-11468 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Xiaoyu Yao >Assignee: Yiqun Lin >Priority: Critical > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-11468-HDFS-7240.001.patch, > HDFS-11468-HDFS-7240.002.patch, HDFS-11468-HDFS-7240.003.patch, > HDFS-11468-HDFS-7240.004.patch, HDFS-11468-HDFS-7240.005.patch, > HDFS-11468-HDFS-7240.006.patch > > > This ticket is opened to add node metrics in SCM based on heartbeat, node > report and container report from datanodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218016#comment-16218016 ] Xiaoyu Yao commented on HDFS-11468: --- Thanks [~linyiqun] for the contribution and [~cheersyang] for the review. Here is my late +1 for the latest patch. Once HDFS-12474 is in, we will move the current metrics update code from StorageContainerManager#sendContainerReport into the container handler. Also, we should aggregate the metrics from all the reports of different datanodes, in addition to the last report. This way, we can get a global view of the container I/Os over the ozone cluster. This work can all be handled in follow-up JIRAs. > Ozone: SCM: Add Node Metrics for SCM > > > Key: HDFS-11468 > URL: https://issues.apache.org/jira/browse/HDFS-11468 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Xiaoyu Yao >Assignee: Yiqun Lin >Priority: Critical > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-11468-HDFS-7240.001.patch, > HDFS-11468-HDFS-7240.002.patch, HDFS-11468-HDFS-7240.003.patch, > HDFS-11468-HDFS-7240.004.patch, HDFS-11468-HDFS-7240.005.patch, > HDFS-11468-HDFS-7240.006.patch > > > This ticket is opened to add node metrics in SCM based on heartbeat, node > report and container report from datanodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11468: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Committed to the feature branch. Thanks [~cheersyang] for the review. > Ozone: SCM: Add Node Metrics for SCM > > > Key: HDFS-11468 > URL: https://issues.apache.org/jira/browse/HDFS-11468 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Xiaoyu Yao >Assignee: Yiqun Lin >Priority: Critical > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-11468-HDFS-7240.001.patch, > HDFS-11468-HDFS-7240.002.patch, HDFS-11468-HDFS-7240.003.patch, > HDFS-11468-HDFS-7240.004.patch, HDFS-11468-HDFS-7240.005.patch, > HDFS-11468-HDFS-7240.006.patch > > > This ticket is opened to add node metrics in SCM based on heartbeat, node > report and container report from datanodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12521) Ozone: SCM should read all Container info into memory when booting up
[ https://issues.apache.org/jira/browse/HDFS-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218005#comment-16218005 ] Xiaoyu Yao commented on HDFS-12521: --- Thanks [~ljain] for working on this. The patch v5 looks good to me overall. Here are a few comments:

ContainerStateManager.java
Line 41: NIT: please expand the collapsed imports.
Line 208: we need to load the allocated usage here. ContainerInfo needs to include allocated usage; HDFS-12474 will add that information.
Line 405: good catch and fix.
Line 428: NIT: suggest renaming to getMatchingContainers and returning a List.
Line 431: I think a readlock is good enough.

StorageContainerLocationProtocol.proto
Line 83: NIT: containers

TestContainerStateManager
Line 43-48: we spin up the test cluster and then initialize the xceiverClientManager. Please add an @After routine to tear down the resources to avoid a resource leak.
Line 53-55: these local variables can be refactored into class private members, as they can be reused in many tests.

ScmClient.java
Line 48: do we want to change ScmClient#createContainer to return a containerInfo instead of a pipeline as well?

ContainerOperationClient.java
Line 85: same as above.

> Ozone: SCM should read all Container info into memory when booting up > - > > Key: HDFS-12521 > URL: https://issues.apache.org/jira/browse/HDFS-12521 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Lokesh Jain > Labels: ozoneMerge > Attachments: HDFS-12521-HDFS-7240.001.patch, > HDFS-12521-HDFS-7240.002.patch, HDFS-12521-HDFS-7240.003.patch, > HDFS-12521-HDFS-7240.004.patch, HDFS-12521-HDFS-7240.005.patch > > > When SCM boots up it should read all containers into memory. This is a > performance optimization that allows delays on SCM side. This JIRA tracks > that issue. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM
[ https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217996#comment-16217996 ] Yiqun Lin commented on HDFS-11468: -- Will commit this shortly, thanks [~cheersyang]. > Ozone: SCM: Add Node Metrics for SCM > > > Key: HDFS-11468 > URL: https://issues.apache.org/jira/browse/HDFS-11468 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Xiaoyu Yao >Assignee: Yiqun Lin >Priority: Critical > Labels: OzonePostMerge > Attachments: HDFS-11468-HDFS-7240.001.patch, > HDFS-11468-HDFS-7240.002.patch, HDFS-11468-HDFS-7240.003.patch, > HDFS-11468-HDFS-7240.004.patch, HDFS-11468-HDFS-7240.005.patch, > HDFS-11468-HDFS-7240.006.patch > > > This ticket is opened to add node metrics in SCM based on heartbeat, node > report and container report from datanodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception
[ https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217994#comment-16217994 ] Nanda kumar commented on HDFS-12705: Thanks [~hanishakoneru] for taking it up. Instead of setting the cause as the exception message, we can call {{newIoe.initCause(ioe.getCause())}} to set the cause on the newly created exception. > WebHdfsFileSystem exceptions should retain the caused by exception > -- > > Key: HDFS-12705 > URL: https://issues.apache.org/jira/browse/HDFS-12705 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Hanisha Koneru > Attachments: HDFS-12705.001.patch > > > {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host > to the exception. While it preserves the original stacktrace, it omits the > original cause, which complicates debugging. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
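The suggestion above can be sketched in a standalone form. This is an illustrative example only, not the actual {{WebHdfsFileSystem#runWithRetry}} code; the {{wrapWithHost}} helper and its names are hypothetical stand-ins for the reflection-based wrapping done there.

```java
import java.io.IOException;

public class Main {
    // Wrap an IOException to prepend the remote host, keeping both the
    // original stacktrace and, via initCause(), the original caused-by chain.
    static IOException wrapWithHost(String host, IOException ioe) {
        IOException newIoe = new IOException(host + ": " + ioe.getMessage());
        newIoe.setStackTrace(ioe.getStackTrace());   // preserve original stacktrace
        if (ioe.getCause() != null) {
            newIoe.initCause(ioe.getCause());        // retain the caused-by exception
        }
        return newIoe;
    }

    public static void main(String[] args) {
        IOException original = new IOException("connection reset",
                new RuntimeException("root cause"));
        IOException wrapped = wrapWithHost("dn1.example.com", original);
        System.out.println(wrapped.getMessage()); // dn1.example.com: connection reset
        System.out.println(wrapped.getCause());   // java.lang.RuntimeException: root cause
    }
}
```

Without the {{initCause}} call, the wrapped exception would report only the message and drop the root cause from its chain, which is exactly the debugging problem the JIRA describes.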
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217987#comment-16217987 ] Hadoop QA commented on HDFS-12697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 202 unchanged - 0 fixed = 206 total (was 202) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 0s{color} | {color:red} The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}159m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}214m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestBlocksScheduledCounter | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.cblock.TestBufferManager | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.ozone.scm.node.TestNodeManager | | | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue |
[jira] [Commented] (HDFS-12686) Erasure coding system policy state is not correctly saved and loaded during real cluster restart
[ https://issues.apache.org/jira/browse/HDFS-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217973#comment-16217973 ] SammiChen commented on HDFS-12686: -- Hi [~jojochuang], this JIRA is closely related to HDFS-12682. I was planning to work on it after HDFS-12682 is committed. Thanks [~xiaochen] for taking care of it together in HDFS-12682. > Erasure coding system policy state is not correctly saved and loaded during > real cluster restart > > > Key: HDFS-12686 > URL: https://issues.apache.org/jira/browse/HDFS-12686 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > > Inspired by HDFS-12682, I found the system erasure coding policy state will > not be correctly saved and loaded in a real cluster, though there are such > kinds of unit tests and all pass with MiniCluster. It's because the > MiniCluster keeps the same static system erasure coding policy object after > the NN restart operation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)
[ https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217962#comment-16217962 ] Virajith Jalaparti commented on HDFS-12665: ---

bq. Sure. This means all the classes used by fs2img will be in the same package (unless they need dependencies like using DynamoDB, AzureTable, etc).

Yes, HDFS-11902 already includes this change.

bq. In an early version we refactored it to use ExtendedBlock as the key but were advised that it should remain Block. AIUI, the AliasMap is unique to a NN so there is no ambiguity.

Yes, only one NN uses the AliasMap. However, the check was on the DN to ensure that when using HDFS federation (multiple block pool ids), the provided volume wasn't reported to a NN which doesn't expect the blocks to be reported.

bq. If an administrator is running with {{DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED}} set to false but {{DFSConfigKeys.DFS_USE_ALIASMAP}} set to true, it's pretty lame. Should we throw or just log a warning about the misconfiguration?

I think logging a warning and disabling this makes sense.

> [AliasMap] Create a version of the AliasMap that runs in memory in the > Namenode (leveldb) > - > > Key: HDFS-12665 > URL: https://issues.apache.org/jira/browse/HDFS-12665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs > Attachments: HDFS-12665-HDFS-9806.001.patch, > HDFS-12665-HDFS-9806.002.patch > > > The design of Provided Storage requires the use of an AliasMap to manage the > mapping between blocks of files on the local HDFS and ranges of files on a > remote storage system. To reduce load from the Namenode, this can be done > using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). > However, to aid adoption and ease of deployment, we propose an in memory > version. 
> This AliasMap will be a wrapper around LevelDB (already a dependency from the > Timeline Service) and use protobuf for the key (blockpool, blockid, and > genstamp) and the value (url, offset, length, nonce). The in memory service > will also have a configurable port on which it will listen for updates from > Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
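The key/value mapping described above can be sketched as a toy example. This is purely illustrative: a HashMap stands in for the LevelDB store, and every class and method name below is hypothetical, not the actual HDFS-12665 implementation. Only the shapes come from the description — keys of (blockpool, blockid, genstamp) and values of (url, offset, length, nonce).

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Value side of the alias map: where a block's data lives remotely.
    static class FileRegion {
        final String url;
        final long offset;
        final long length;
        final String nonce;

        FileRegion(String url, long offset, long length, String nonce) {
            this.url = url;
            this.offset = offset;
            this.length = length;
            this.nonce = nonce;
        }
    }

    // HashMap stand-in for the LevelDB store.
    static final Map<String, FileRegion> store = new HashMap<>();

    // Encode the composite key deterministically, the way a protobuf key
    // would serialize (blockpool, blockid, genstamp) into one byte sequence.
    static String blockKey(String blockPool, long blockId, long genStamp) {
        return blockPool + "/" + blockId + "/" + genStamp;
    }

    static void put(String blockPool, long blockId, long genStamp, FileRegion region) {
        store.put(blockKey(blockPool, blockId, genStamp), region);
    }

    static FileRegion resolve(String blockPool, long blockId, long genStamp) {
        return store.get(blockKey(blockPool, blockId, genStamp));
    }

    public static void main(String[] args) {
        put("BP-1", 1073741825L, 1001L,
            new FileRegion("remote://bucket/data/part-00000", 0L, 134217728L, "nonce-1"));
        FileRegion r = resolve("BP-1", 1073741825L, 1001L);
        System.out.println(r.url + " @ " + r.offset + " len " + r.length);
    }
}
```

The deterministic key encoding is the design point: any store (LevelDB here, or a pluggable external service) only needs ordered byte-sequence lookups, so the same key/value schema works across backends.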
[jira] [Comment Edited] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217953#comment-16217953 ] Yongjun Zhang edited comment on HDFS-12544 at 10/25/17 1:13 AM: Thanks for the updated patch [~manojg], +1 on rev05 pending jenkins. As we discussed, please have separate jira to make distcp work with the changes made here. Thanks. was (Author: yzhangal): Thanks for the updated patch [~manojg], +1 on rev05. As we discussed, please have separate jira to make distcp work with the changes made here. Thanks. > SnapshotDiff - support diff generation on any snapshot root descendant > directory > > > Key: HDFS-12544 > URL: https://issues.apache.org/jira/browse/HDFS-12544 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, > HDFS-12544.03.patch, HDFS-12544.04.patch, HDFS-12544.05.patch > > > {noformat} > # hdfs snapshotDiff > > {noformat} > Using snapshot diff command, we can generate a diff report between any two > given snapshots under a snapshot root directory. The command today only > accepts the path that is a snapshot root. There are many deployments where > the snapshot root is configured at the higher level directory but the diff > report needed is only for a specific directory under the snapshot root. In > these cases, the diff report can be filtered for changes pertaining to the > directory we are interested in. But when the snapshot root directory is very > huge, the snapshot diff report generation can take minutes even if we are > interested to know the changes only in a small directory. So, it would be > highly performant if the diff report calculation can be limited to only the > interesting sub-directory of the snapshot root instead of the whole snapshot > root. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217953#comment-16217953 ] Yongjun Zhang commented on HDFS-12544: -- Thanks for the updated patch [~manojg], +1 on rev05. As we discussed, please have separate jira to make distcp work with the changes made here. Thanks. > SnapshotDiff - support diff generation on any snapshot root descendant > directory > > > Key: HDFS-12544 > URL: https://issues.apache.org/jira/browse/HDFS-12544 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, > HDFS-12544.03.patch, HDFS-12544.04.patch, HDFS-12544.05.patch > > > {noformat} > # hdfs snapshotDiff > > {noformat} > Using snapshot diff command, we can generate a diff report between any two > given snapshots under a snapshot root directory. The command today only > accepts the path that is a snapshot root. There are many deployments where > the snapshot root is configured at the higher level directory but the diff > report needed is only for a specific directory under the snapshot root. In > these cases, the diff report can be filtered for changes pertaining to the > directory we are interested in. But when the snapshot root directory is very > huge, the snapshot diff report generation can take minutes even if we are > interested to know the changes only in a small directory. So, it would be > highly performant if the diff report calculation can be limited to only the > interesting sub-directory of the snapshot root instead of the whole snapshot > root. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12482: - Attachment: HDFS-12482.03.patch Thanks for the suggestions, [~xiaochen]. Added checks as you suggested. > Provide a configuration to adjust the weight of EC recovery tasks to adjust > the speed of recovery > - > > Key: HDFS-12482 > URL: https://issues.apache.org/jira/browse/HDFS-12482 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12482.00.patch, HDFS-12482.01.patch, > HDFS-12482.02.patch, HDFS-12482.03.patch > > > The relative speed of EC recovery compared to 3x replica recovery is a > function of (EC codec, number of sources, NIC speed, CPU speed, etc.). > Currently the EC recovery has a fixed {{xmitsInProgress}} of {{max(# of > sources, # of targets)}} compared to {{1}} for 3x replica recovery, and NN > uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the > DataNode; thus we can add a coefficient for the user to tune the weight of EC > recovery tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12682) ECAdmin -listPolicies will always show policy state as DISABLED
[ https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217928#comment-16217928 ] Lei (Eddy) Xu commented on HDFS-12682: -- Thanks for reporting this and working on this, [~xiaochen]
* I think we should keep {{ErasureCodingPolicyInfo}} as {{@InterfaceAudience.Private}}, and also make {{ErasureCodingPolicy}} private. These classes should not be used outside of HDFS. Lessons from HADOOP-14957 :)
* Wondering whether it is possible to always set {{state}} in {{PBHelperClient#convertErasureCodingPolicy}}:
{code}
public static ErasureCodingPolicy convertErasureCodingPolicy(ErasureCodingPolicyProto proto) {
  ...
  if (proto.hasState()) {
    policy.setState(proto.getState());
  }
  return policy;
}
{code}
> ECAdmin -listPolicies will always show policy state as DISABLED > --- > > Key: HDFS-12682 > URL: https://issues.apache.org/jira/browse/HDFS-12682 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch > > > On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as > DISABLED. 
> {noformat} > [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies > Erasure Coding Policies: > ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, > numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED] > ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, > numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED] > ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, > numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED] > ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, > Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], > CellSize=1048576, Id=3, State=DISABLED] > ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, > numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED] > [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec > XOR-2-1-1024k > {noformat} > This is because when [deserializing > protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942], > the static instance of [SystemErasureCodingPolicies > class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101] > is first checked, and always returns the cached policy objects, which are > created by default with state=DISABLED. > All the existing unit tests pass, because that static instance that the > client (e.g. ECAdmin) reads in unit test is updated by NN. :) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217907#comment-16217907 ] Hadoop QA commented on HDFS-12697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 48s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 201 unchanged - 0 fixed = 205 total (was 201) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 2s{color} | {color:red} The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}141m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}203m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.cblock.TestCBlockReadWrite | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12697 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893786/HDFS-12697-HDFS-7240.01.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall shadedclient findbugs checkstyle
[jira] [Commented] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory
[ https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217904#comment-16217904 ] Arpit Agarwal commented on HDFS-10659: -- bq. During transition to active state, Namenode crashes if a quorum of JNs do not have the paxos directory. This is because the NN tries to recover the log segments and in the process needs to write recovery data into the paxos dir. Data is written into the paxos dir only during Journal#acceptRecovery() phase. So all we need to do is add a check and create the paxos dir if it does not exist during this phase. +1 for this approach. I will review the v04 patch. > Namenode crashes after Journalnode re-installation in an HA cluster due to > missing paxos directory > -- > > Key: HDFS-10659 > URL: https://issues.apache.org/jira/browse/HDFS-10659 > Project: Hadoop HDFS > Issue Type: Improvement > Components: ha, journal-node >Affects Versions: 2.7.0 >Reporter: Amit Anand >Assignee: Hanisha Koneru > Attachments: HDFS-10659.000.patch, HDFS-10659.001.patch, > HDFS-10659.002.patch, HDFS-10659.003.patch, HDFS-10659.004.patch > > > In my environment I am seeing {{Namenodes}} crashing down after a majority of > {{Journalnodes}} are re-installed. We manage multiple clusters and do rolling > upgrades followed by rolling re-installs of each node including master (NN, JN, > RM, ZK) nodes. When a journal node is re-installed or moved to a new > disk/host, instead of running the {{"initializeSharedEdits"}} command, I copy > the {{VERSION}} file from one of the other {{Journalnodes}}, and that allows my > {{NN}} to start writing data to the newly installed {{Journalnode}}. > To achieve quorum for JN and recover unfinalized segments, the NN during startup > creates .tmp files under the {{"/jn/current/paxos"}} directory.
In the > current implementation, the "paxos" directory is only created during the > {{"initializeSharedEdits"}} command, and if a JN is re-installed the "paxos" > directory is not created upon JN startup or by the NN while writing .tmp > files, which causes the NN to crash with the following error message: > {code} > 192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No > such file or directory) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.<init>(FileOutputStream.java:221) > at java.io.FileOutputStream.<init>(FileOutputStream.java:171) > at > org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145) > {code} > The current > 
[getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130] > method simply returns a path to a file under the "paxos" directory without > verifying its existence. Since the "paxos" directory holds files that are > required for NN recovery and achieving JN quorum, my proposed solution is to > add a check to the "getPaxosFile" method and create the {{"paxos"}} directory if > it is missing. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
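The proposed check amounts to creating the directory lazily before handing back a path under it. A minimal sketch under assumed names (this is not the actual JNStorage code):

```java
import java.io.File;

// Sketch of the proposed fix (illustrative names, not the actual JNStorage
// code): ensure the "paxos" directory exists before returning a path under
// it, so a re-installed JournalNode does not fail later in
// persistPaxosData() with "No such file or directory".
class PaxosStorage {
    private final File currentDir;

    PaxosStorage(File currentDir) {
        this.currentDir = currentDir;
    }

    File getPaxosFile(long segmentTxId) {
        File paxosDir = new File(currentDir, "paxos");
        // Create the directory on demand instead of assuming that
        // initializeSharedEdits already created it.
        if (!paxosDir.exists() && !paxosDir.mkdirs()) {
            throw new IllegalStateException("Could not create " + paxosDir);
        }
        return new File(paxosDir, segmentTxId + ".tmp");
    }
}
```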
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217901#comment-16217901 ] Hudson commented on HDFS-12502: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13131 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13131/]) Revert "HDFS-12502. nntop should support a category based on (zhz: rev 17cd8d0c1786d6e3ea5fe7c90b176381db6f9c36) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestTopMetrics.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12544: -- Attachment: HDFS-12544.05.patch Attached v05 patch with test updated to cover more cases discussed in the previous comment. [~yzhangal], can you please take a look at the latest patch? > SnapshotDiff - support diff generation on any snapshot root descendant > directory > > > Key: HDFS-12544 > URL: https://issues.apache.org/jira/browse/HDFS-12544 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, > HDFS-12544.03.patch, HDFS-12544.04.patch, HDFS-12544.05.patch > > > {noformat} > # hdfs snapshotDiff > > {noformat} > Using snapshot diff command, we can generate a diff report between any two > given snapshots under a snapshot root directory. The command today only > accepts the path that is a snapshot root. There are many deployments where > the snapshot root is configured at the higher level directory but the diff > report needed is only for a specific directory under the snapshot root. In > these cases, the diff report can be filtered for changes pertaining to the > directory we are interested in. But when the snapshot root directory is very > huge, the snapshot diff report generation can take minutes even if we are > interested to know the changes only in a small directory. So, it would be > highly performant if the diff report calculation can be limited to only the > interesting sub-directory of the snapshot root instead of the whole snapshot > root. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217862#comment-16217862 ] Wei Yan commented on HDFS-12502: {quote} We should probably also extend fair call queue to consider the cost of each op. {quote} We also spent some time investigating building a cost-based FairCallQueue, where the cost is the lock hold time. One challenge here would be how to differentiate a read lock from a write lock. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
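One straightforward way to differentiate the two lock types in a lock-time cost, as discussed above, is to weight write-lock time more heavily, since it blocks all other operations. A hypothetical sketch of that idea (not Hadoop's FairCallQueue code):

```java
// Hypothetical sketch of a lock-time-based cost function for a fair call
// queue: read-lock time counts at face value, write-lock time is scaled up
// because a held write lock excludes all other NameNode operations. This is
// an illustration of the idea only, not the actual FairCallQueue code.
class LockTimeCost {
    private final double writeWeight;

    LockTimeCost(double writeWeight) {
        this.writeWeight = writeWeight;
    }

    // Combined cost of one op, given how long it held each lock.
    long cost(long readLockNanos, long writeLockNanos) {
        return readLockNanos + (long) (writeLockNanos * writeWeight);
    }
}
```

The weight itself would presumably be a tunable configuration value, since the right ratio depends on the workload mix.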
[jira] [Updated] (HDFS-12707) Ozone: start-all script is missing ozone start
[ https://issues.apache.org/jira/browse/HDFS-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12707: Summary: Ozone: start-all script is missing ozone start (was: start-all script is missing ozone start) > Ozone: start-all script is missing ozone start > -- > > Key: HDFS-12707 > URL: https://issues.apache.org/jira/browse/HDFS-12707 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > start-all script is missing ozone start -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Open (was: Patch Available) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Attachment: HDFS-11902-HDFS-9806.009.patch > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Attachment: (was: HDFS-11902-HDFS-9806.009.patch) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Patch Available (was: Open) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217844#comment-16217844 ] Hadoop QA commented on HDFS-12681: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 10s{color} | {color:orange} root: The patch generated 69 new + 631 unchanged - 10 fixed = 700 total (was 641) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.protocol.HdfsFileStatus$Builder.path(byte[]) may expose internal representation by storing an externally mutable object into HdfsFileStatus$Builder.path At HdfsFileStatus.java:by storing an externally mutable object into HdfsFileStatus$Builder.path At HdfsFileStatus.java:[line 459] | | |
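The findbugs warning above ("may expose internal representation by storing an externally mutable object") is conventionally resolved with a defensive copy. A generic illustration of that fix (names mirror the warning, but this is not the actual HdfsFileStatus code):

```java
import java.util.Arrays;

// Generic illustration of the defensive-copy fix for a findbugs
// "may expose internal representation" warning on a builder that stores a
// caller-supplied byte[]. Names echo the warning above; this is not the
// actual HdfsFileStatus.Builder code.
class StatusBuilder {
    private byte[] path;

    StatusBuilder path(byte[] path) {
        // Copy so later mutation of the caller's array cannot change the
        // builder's internal state.
        this.path = (path == null) ? null : Arrays.copyOf(path, path.length);
        return this;
    }

    byte[] getPath() {
        // Return a copy too, for the symmetric "may expose" direction.
        return (path == null) ? null : path.clone();
    }
}
```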
[jira] [Assigned] (HDFS-12707) start-all script is missing ozone start
[ https://issues.apache.org/jira/browse/HDFS-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HDFS-12707: - Assignee: Bharat Viswanadham > start-all script is missing ozone start > --- > > Key: HDFS-12707 > URL: https://issues.apache.org/jira/browse/HDFS-12707 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > start-all script is missing ozone start -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217837#comment-16217837 ] Tsz Wo Nicholas Sze commented on HDFS-12594: Indeed, we have a few options here: # getSnapshotDiffReport returns RemoteIterator. However, snapshotRoot, fromSnapshot and toSnapshot will be missing in the return value. It seems not a problem. # add a new getDiffListIterator method which returns RemoteIterator to SnapshotDiffReport. For the old SnapshotDiffReport obtained by non-iterative rpc, the new getDiffListIterator method is easy to implement. For the new iterative SnapshotDiffReport, throws an UnsupportedOperationException in getDiffList. # add a new class, say SnapshotDiffReportWithRemoteIterator, which has a getDiffListIterator method but not getDiffList. What do you think? > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB. > Worst case, with all Renames ops in sanpshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in at > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the the diff report exceeding the limit , it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! 
- in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR! > java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
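All three options above share the same shape: page the diff entries through an iterator so no single RPC response has to carry the whole report. A toy sketch of that shape (hypothetical types, not the real ClientProtocol API):

```java
import java.util.Arrays;
import java.util.List;

// Toy sketch of the iterator shape discussed above (hypothetical types, not
// the actual HDFS API): diff entries are consumed one at a time. A real
// implementation would fetch the next batch from the NameNode by RPC inside
// hasNext()/next(); here the "server side" is just a local list.
interface RemoteIterator<E> {
    boolean hasNext();
    E next();
}

class PagedDiffIterator implements RemoteIterator<String> {
    private final List<String> serverEntries; // stands in for the NameNode
    private int cursor = 0;

    PagedDiffIterator(List<String> serverEntries) {
        this.serverEntries = serverEntries;
    }

    @Override
    public boolean hasNext() {
        return cursor < serverEntries.size();
    }

    @Override
    public String next() {
        return serverEntries.get(cursor++);
    }
}
```

Whatever option is chosen, the caller (e.g. distcp) then iterates instead of holding the full report, so the 128 MB ipc.maximum.response.length ceiling no longer applies to the report as a whole.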
[jira] [Updated] (HDFS-12707) start-all script is missing ozone start
[ https://issues.apache.org/jira/browse/HDFS-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12707: -- Issue Type: Sub-task (was: Bug) Parent: HDFS-7240 > start-all script is missing ozone start > --- > > Key: HDFS-12707 > URL: https://issues.apache.org/jira/browse/HDFS-12707 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham > > start-all script is missing ozone start -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12707) start-all script is missing ozone start
Bharat Viswanadham created HDFS-12707: - Summary: start-all script is missing ozone start Key: HDFS-12707 URL: https://issues.apache.org/jira/browse/HDFS-12707 Project: Hadoop HDFS Issue Type: Bug Reporter: Bharat Viswanadham start-all script is missing ozone start -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind
[ https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217834#comment-16217834 ] Hadoop QA commented on HDFS-11807: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-8707 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 35s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 41s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_151 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 15s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 19s{color} | {color:green} HDFS-8707 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 10s{color} | {color:green} the patch passed with JDK v1.8.0_151 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 10s{color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 32s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}225m 35s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_151. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}359m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_151 Failed CTEST tests | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_shim_static | | JDK v1.7.0_151 Failed CTEST tests | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_shim_static | | | test_hdfs_ext_hdfspp_test_shim_static | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3117e2a | | JIRA Issue | HDFS-11807 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893756/HDFS-11807.HDFS-8707.002.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 397391cae5b7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 3f92e63 | | maven | version: Apache Maven 3.0.5 | | Default Java | 1.7.0_151 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_151 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_151 | | CTEST | https://builds.apache.org/job/PreCommit-HDFS-Build/21798/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_151-ctest.txt | | CTEST | https://builds.apache.org/job/PreCommit-HDFS-Build/21798/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151-ctest.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21798/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151.txt | | JDK v1.7.0_151 Test Results |
[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails
[ https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217817#comment-16217817 ] John Zhuge commented on HDFS-3296: -- TestDomainSocket#testAsyncCloseDuringWrite failure: {noformat} java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:206) at org.apache.hadoop.net.unix.TestDomainSocket.testAsyncCloseDuringIO(TestDomainSocket.java:245) at org.apache.hadoop.net.unix.TestDomainSocket.testAsyncCloseDuringWrite(TestDomainSocket.java:250) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.lang.NoSuchMethodError: at org.apache.hadoop.net.unix.DomainSocket.writeArray0(Native Method) at org.apache.hadoop.net.unix.DomainSocket.access$300(DomainSocket.java:45) at org.apache.hadoop.net.unix.DomainSocket$DomainOutputStream.write(DomainSocket.java:598) at java.io.OutputStream.write(OutputStream.java:75) at org.apache.hadoop.net.unix.TestDomainSocket$3.call(TestDomainSocket.java:195) at org.apache.hadoop.net.unix.TestDomainSocket$3.call(TestDomainSocket.java:180) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {noformat} > Running libhdfs tests in mac fails > -- > > Key: HDFS-3296 > URL: https://issues.apache.org/jira/browse/HDFS-3296 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Reporter: Amareshwari Sriramadasu >Assignee: Chris Nauroth > Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch, > HDFS-3296.003.patch, HDFS-3296.004.patch > > > Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails > with following error: > {noformat} > [exec] dyld: lazy symbol binding failed: Symbol not found: > _JNI_GetCreatedJavaVMs > [exec] Referenced from: > /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib > [exec] Expected in: flat namespace > [exec] > [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs > [exec] Referenced from: > /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib > [exec] Expected in: flat namespace > [exec] > [exec] > /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: > line 122: 39485 Trace/BPT trap: 5 CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH > LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" > $LIBHDFS_BUILD_DIR/$HDFS_TEST > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217815#comment-16217815 ] Tsz Wo Nicholas Sze commented on HDFS-12594: [~shashikant], thanks for updating the patch. Since the snapshot diff could be so huge that it may not fit in memory, the new DistributedFileSystem.getSnapshotDiffReport method should return a RemoteIterator. Then, the RPC calls are made on demand while consuming the diff. As an example, see DistributedFileSystem.listStatusIterator. > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB. > Worst case, with all rename ops in snapshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in at most > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the diff report exceeding the limit, it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR!
> java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required.
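The RemoteIterator approach suggested above can be sketched in plain Java. This is a toy model, not the real DistributedFileSystem/ClientProtocol API; the class and method names here are illustrative. The point is that each "RPC" returns one bounded batch, and the next batch is fetched only when the consumer exhausts the current one, so no single response has to carry the whole diff.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Toy sketch of a paging iterator over a snapshot diff. In HDFS the
 * fetchNextBatch() call would be an RPC to the NameNode carrying a cursor;
 * here a local list stands in for the server-side diff.
 */
public class BatchedDiffIterator implements Iterator<String> {
    private final List<String> serverSideDiff; // stand-in for the NameNode's diff
    private final int batchSize;               // analogous to a per-RPC entry limit
    private int nextOffset = 0;                // cursor sent back to the "server"
    private Iterator<String> current;          // entries of the batch fetched so far

    public BatchedDiffIterator(List<String> serverSideDiff, int batchSize) {
        this.serverSideDiff = serverSideDiff;
        this.batchSize = batchSize;
        this.current = fetchNextBatch().iterator();
    }

    /** Simulates one RPC: returns at most batchSize entries starting at the cursor. */
    private List<String> fetchNextBatch() {
        int end = Math.min(nextOffset + batchSize, serverSideDiff.size());
        List<String> batch = new ArrayList<>(serverSideDiff.subList(nextOffset, end));
        nextOffset = end;
        return batch;
    }

    @Override
    public boolean hasNext() {
        if (current.hasNext()) {
            return true;
        }
        if (nextOffset >= serverSideDiff.size()) {
            return false; // server has no more entries
        }
        current = fetchNextBatch().iterator(); // lazy "RPC", issued on demand
        return current.hasNext();
    }

    @Override
    public String next() {
        return current.next();
    }

    public static void main(String[] args) {
        List<String> diff = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            diff.add("M ./file" + i);
        }
        // batch size 3 -> the 10 entries arrive over 4 simulated RPCs
        BatchedDiffIterator it = new BatchedDiffIterator(diff, 3);
        int count = 0;
        while (it.hasNext()) {
            it.next();
            count++;
        }
        System.out.println(count); // prints 10
    }
}
```

This mirrors how listStatusIterator pages large directory listings with a start-after cursor instead of one huge response.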
[jira] [Commented] (HDFS-12702) Ozone: Add hugo to the dev docker image
[ https://issues.apache.org/jira/browse/HDFS-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217811#comment-16217811 ] Hadoop QA commented on HDFS-12702: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12702 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893813/HDFS-12702-HDFS-7240.002.patch | | Optional Tests | asflicense shellcheck shelldocs | | uname | Linux 69e9d7121a5a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 1e1fe06 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | modules | C: U: | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21804/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Add hugo to the dev docker image > --- > > Key: HDFS-12702 > URL: https://issues.apache.org/jira/browse/HDFS-12702 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12702-HDFS-7240.001.patch, > HDFS-12702-HDFS-7240.002.patch > > > Both HADOOP-14163 and HDFS-12664 require the hugo site generation tool. To make > it easier to review those patches I suggest adding Hugo to the dev docker > image now. > This patch adds hugo to the dev docker image: > Test method: > {code} > cd dev-support/docker > docker build -t test . 
> docker run test hugo version > docker rmi test > {code} > Expected output (after docker run): > {code} > Hugo Static Site Generator v0.30.2 linux/amd64 BuildDate: 2017-10-19T11:34:27Z > {code}
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217794#comment-16217794 ] Bharat Viswanadham commented on HDFS-12697: --- [~anu] Yes both are same. > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch, > HDFS-12697-HDFS-7240.02.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217788#comment-16217788 ] Anu Engineer commented on HDFS-12697: - +1 for version 4 in the review board. is the V2 patch here the same thing? if so, I will commit this after the Jenkins run. > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch, > HDFS-12697-HDFS-7240.02.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-12502: - Fix Version/s: (was: 3.1.0) (was: 3.0.0) (was: 2.8.3) (was: 2.9.0) > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1621#comment-1621 ] Zhe Zhang edited comment on HDFS-12502 at 10/24/17 10:07 PM: - For some reason we were getting over 600k~700k FilesInGetListing per second during a few days, causing spikes in GC time. Single op processing time (inside the FSNLock, measured via {{FSNReadLockOpNameNanosAvgTime}}) increased by over 50%. And we don't have any tool to find the abusing workload. Yes, we are using the fair call queue, but similar to NNTop it only considers the number of ops; and each large listing is 100 times as expensive as a getFileInfo. We should probably also extend the fair call queue to consider the cost of each op. I just reverted this patch. was (Author: zhz): For some reason we were getting over 600k~700k FilesInGetListing per second during a few days, causing spikes in GC time. Single op processing time (inside the FSNLock, measured via {{FSNReadLockOpNameNanosAvgTime}}) increased by over 50%. And we don't have any tool to find the abusing workload. Yes, we are using the fair call queue, but similar to NNTop it only considers the number of ops; and each large listing is 100 times as expensive as a getFileInfo. We should probably also extend the fair call queue to consider the cost of each op. I'll work on reverting the patch now. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. 
Therefore > it'd be very useful for nntop to support this category.
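The "extend the fair call queue to consider the cost of each op" idea from the comment above can be sketched as follows. This is a toy stand-alone model, not Hadoop's actual FairCallQueue/DecayRpcScheduler code; the class name, cost units, and share thresholds are all made up for illustration. The key change is that callers are charged by the work an op causes (e.g. the number of files a listing returns) rather than one unit per call.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy cost-based scheduler: accumulate per-user cost and assign a priority
 * level by each user's share of the total cost. Higher level means lower
 * scheduling priority, following the convention of Hadoop's DecayRpcScheduler.
 */
public class CostBasedScheduler {
    private final Map<String, Long> costByUser = new HashMap<>();
    private long totalCost = 0;

    /** Record one completed op; for a listing, 'cost' could be the files returned. */
    public void charge(String user, long cost) {
        costByUser.merge(user, cost, Long::sum);
        totalCost += cost;
    }

    /** Illustrative thresholds; the real scheduler's are configurable. */
    public int priorityLevel(String user) {
        if (totalCost == 0) return 0;
        double share = costByUser.getOrDefault(user, 0L) / (double) totalCost;
        if (share > 0.5) return 3;
        if (share > 0.25) return 2;
        if (share > 0.1) return 1;
        return 0;
    }

    public static void main(String[] args) {
        CostBasedScheduler s = new CostBasedScheduler();
        // One caller does 10 listings that each return 100k files...
        for (int i = 0; i < 10; i++) s.charge("heavyLister", 100_000);
        // ...another does 1000 cheap getFileInfo calls (cost 1 each).
        for (int i = 0; i < 1000; i++) s.charge("normalUser", 1);
        // By call count the second user dominates; by cost the first one does,
        // so only the heavy lister gets demoted.
        System.out.println(s.priorityLevel("heavyLister")); // 3
        System.out.println(s.priorityLevel("normalUser"));  // 0
    }
}
```

A call-count scheduler would rank these two users the other way around, which is exactly the blind spot described in the comment.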
[jira] [Reopened] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang reopened HDFS-12502: -- > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-12502: - Fix Version/s: (was: 2.7.5) > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1621#comment-1621 ] Zhe Zhang edited comment on HDFS-12502 at 10/24/17 9:59 PM: For some reason we were getting over 600k~700k FilesInGetListing per second during a few days, causing spikes in GC time. Single op processing time (inside the FSNLock, measured via {{FSNReadLockOpNameNanosAvgTime}}) increased by over 50%. And we don't have any tool to find the abusing workload. Yes, we are using the fair call queue, but similar to NNTop it only considers the number of ops; and each large listing is 100 times as expensive as a getFileInfo. We should probably also extend the fair call queue to consider the cost of each op. I'll work on reverting the patch now. was (Author: zhz): For some reason we were getting over 600k~700k FilesInGetListing per second during a few days, causing spikes in GC time. Single op processing time (inside the FSNLock, measured via {{FSNReadLockOpNameNanosAvgTime}}) increased by over 50%. And we don't have any tool to find the abusing workload. Yes, we are using the fair call queue, but similar to NNTop it only considers the number of ops; and each large listing is 100 times as expensive as a getFileInfo. We should probably also extend the fair call queue to consider the cost of each op. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. 
Therefore > it'd be very useful for nntop to support this category.
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1621#comment-1621 ] Zhe Zhang commented on HDFS-12502: -- For some reason we were getting over 600k~700k FilesInGetListing per second during a few days, causing spikes in GC time. Single op processing time (inside the FSNLock, measured via {{FSNReadLockOpNameNanosAvgTime}}) increased by over 50%. And we don't have any tool to find the abusing workload. Yes, we are using the fair call queue, but similar to NNTop it only considers the number of ops; and each large listing is 100 times as expensive as a getFileInfo. We should probably also extend the fair call queue to consider the cost of each op. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category.
[jira] [Commented] (HDFS-12702) Ozone: Add hugo to the dev docker image
[ https://issues.apache.org/jira/browse/HDFS-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217763#comment-16217763 ] Hadoop QA commented on HDFS-12702: -- (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HDFS-Build/21804/console in case of problems. > Ozone: Add hugo to the dev docker image > --- > > Key: HDFS-12702 > URL: https://issues.apache.org/jira/browse/HDFS-12702 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12702-HDFS-7240.001.patch, > HDFS-12702-HDFS-7240.002.patch > > > Both HADOOP-14163 and HDFS-12664 require the hugo site generation tool. To make > it easier to review those patches I suggest adding Hugo to the dev docker > image now. > This patch adds hugo to the dev docker image: > Test method: > {code} > cd dev-support/docker > docker build -t test . > docker run test hugo version > docker rmi test > {code} > Expected output (after docker run): > {code} > Hugo Static Site Generator v0.30.2 linux/amd64 BuildDate: 2017-10-19T11:34:27Z > {code}
[jira] [Comment Edited] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217755#comment-16217755 ] Bharat Viswanadham edited comment on HDFS-12697 at 10/24/17 9:42 PM: - Addressed review comments from [~anu] in the reviewboard. Attached patch v02. was (Author: bharatviswa): Addressed review comments from [~anu] in the reviewboard. > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch, > HDFS-12697-HDFS-7240.02.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217755#comment-16217755 ] Bharat Viswanadham commented on HDFS-12697: --- Addressed review comments from [~anu] in the reviewboard. > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch, > HDFS-12697-HDFS-7240.02.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-12697-HDFS-7240.02.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch, > HDFS-12697-HDFS-7240.02.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217740#comment-16217740 ] Daryn Sharp commented on HDFS-12502: I'd prefer to see it reverted while we figure out how to add it differently. We've already had to internally revert this change. I have a general concern about the new metrics creep. Nothing is free... I don't want Heisenberg to become an unwelcome roommate. Out of curiosity, can you share general details of the incident that motivates you to add this metric? How much of an impact did you see from listing a large dir? Are you using the fair call queue? I'm far more worried about a 10k+ create/sec flood than a 100k+ listStatus/sec flood regardless of the number of items in the dir. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception
[ https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-12705: -- Attachment: HDFS-12705.001.patch Thanks Daryn for filing the bug. Uploaded a simple patch to retain the exception cause if it exists. > WebHdfsFileSystem exceptions should retain the caused by exception > -- > > Key: HDFS-12705 > URL: https://issues.apache.org/jira/browse/HDFS-12705 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Hanisha Koneru > Attachments: HDFS-12705.001.patch > > > {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host > to the exception. While it preserves the original stacktrace, it omits the > original cause which complicates debugging. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
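The fix discussed above can be modeled with a minimal stand-alone sketch. Assuming the patch works roughly this way; the class name, the prependHost helper, and the host name below are illustrative, not the actual WebHdfsFileSystem#runWithRetry code. The idea: when rebuilding the exception via reflection to prepend the remote host, copy the original stack trace (as already done) and additionally re-attach the original cause.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

/**
 * Simplified model of retaining the caused-by exception while prepending
 * host information to an exception message via reflection.
 */
public class RetainCauseDemo {

    static IOException prependHost(IOException original, String host) {
        try {
            // Rebuild the same exception type with the host prepended.
            Constructor<? extends IOException> ctor =
                original.getClass().getConstructor(String.class);
            IOException rebuilt = ctor.newInstance(host + ": " + original.getMessage());
            rebuilt.setStackTrace(original.getStackTrace()); // preserved before the fix
            if (original.getCause() != null) {
                rebuilt.initCause(original.getCause());      // the step the fix adds
            }
            return rebuilt;
        } catch (ReflectiveOperationException e) {
            return original; // fall back to the untouched exception
        }
    }

    public static void main(String[] args) {
        IOException cause = new IOException("connection reset");
        IOException wrapped = new IOException("request failed");
        wrapped.initCause(cause);
        // "datanode1.example.com" is a made-up host for the demo.
        IOException rebuilt = prependHost(wrapped, "datanode1.example.com");
        System.out.println(rebuilt.getMessage());
        System.out.println(rebuilt.getCause().getMessage());
    }
}
```

Without the initCause call, rebuilt.getCause() would be null and the "connection reset" context would be lost, which is exactly the debugging gap the JIRA describes.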
[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217729#comment-16217729 ] Hadoop QA commented on HDFS-12482: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 421 unchanged - 0 fixed = 426 total (was 421) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 9s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestPread | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12482 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893614/HDFS-12482.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 1442f1eb665b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1c5c2b5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21801/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21801/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Updated] (HDFS-3296) Running libhdfs tests in mac fails
[ https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-3296: - Attachment: HDFS-3296.004.patch I'd like to share a patch I have been using. It is not totally polished or thoroughly tested yet. Patch 004: * Work around the domain socket path limitation by using "/tmp" instead of $java.io.tmpdir * On Mac, {{shutdown0}} cannot unblock {{accept0}}, which still holds on to a refcount, so {{DomainSocket#close}} gets stuck in its loop. Fixed the issue by using the "self-pipe" method. See the {{accept1}} implementation for details. * Add syslog calls to DomainSocket.c for easier debugging * Set DYLD_LIBRARY_PATH in the hadoop-hdfs-native-client pom. All libhdfs tests pass, including the zerocopy test. TODO: * {{TestDomainSocket#testAsyncCloseDuringWrite}} failed * Pass all ShortCircuitRead tests > Running libhdfs tests in mac fails > -- > > Key: HDFS-3296 > URL: https://issues.apache.org/jira/browse/HDFS-3296 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Reporter: Amareshwari Sriramadasu >Assignee: Chris Nauroth > Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch, > HDFS-3296.003.patch, HDFS-3296.004.patch > > > Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails > with the following error: > {noformat} > [exec] dyld: lazy symbol binding failed: Symbol not found: > _JNI_GetCreatedJavaVMs > [exec] Referenced from: > /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib > [exec] Expected in: flat namespace > [exec] > [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs > [exec] Referenced from: > /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib > [exec] Expected in: flat namespace > [exec] > [exec] > /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: > line 122: 39485 Trace/BPT trap: 5 CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH > 
LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" > $LIBHDFS_BUILD_DIR/$HDFS_TEST > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12702) Ozone: Add hugo to the dev docker image
[ https://issues.apache.org/jira/browse/HDFS-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12702: Attachment: HDFS-12702-HDFS-7240.002.patch The final patch is uploaded. The previous one worked well with Jenkins. I removed the fake test modification from the patch. > Ozone: Add hugo to the dev docker image > --- > > Key: HDFS-12702 > URL: https://issues.apache.org/jira/browse/HDFS-12702 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12702-HDFS-7240.001.patch, > HDFS-12702-HDFS-7240.002.patch > > > Both HADOOP-14163 and HDFS-12664 require the Hugo site generation tool. To > make it easier to review those patches, I suggest adding Hugo to the dev > docker image now. > This patch adds hugo to the dev docker image: > Test method: > {code} > cd dev-support/docker > docker build -t test . > docker run test hugo version > docker rmi test > {code} > Expected output (after docker run): > {code} > Hugo Static Site Generator v0.30.2 linux/amd64 BuildDate: 2017-10-19T11:34:27Z > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217704#comment-16217704 ] Zhe Zhang commented on HDFS-12502: -- Thanks [~daryn], I agree that the name {{NNTopUserOpCounts}} has an indication that each op should be counted once. I'll add another patch to move the newly added metric to a new context (maybe {{NNTopUserPerfImpact}}). Would you be OK with that (or still want to revert this first)? > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0 > > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch, HDFS-12502.04.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12653) Implement toArray() and subArray() for ReadOnlyList
[ https://issues.apache.org/jira/browse/HDFS-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217697#comment-16217697 ] Daryn Sharp commented on HDFS-12653: How do you intend to use this with the inode attr provider? It looks like it's still going to be making lots of copies of the arrays? > Implement toArray() and subArray() for ReadOnlyList > --- > > Key: HDFS-12653 > URL: https://issues.apache.org/jira/browse/HDFS-12653 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12653.01.patch > > > {{ReadOnlyList}} today gives an unmodifiable view of the backing List. This > list supports following Util methods for easy construction of read only views > of any given list. > {noformat} > public static ReadOnlyList asReadOnlyList(final List list) > public static List asList(final ReadOnlyList list) > {noformat} > {{asList}} above additionally overrides {{Object[] toArray()}} of the > {{java.util.List}} interface. Unlike the {{java.util.List}}, the above one > returns an array of Objects referring to the backing list and avoid any > copying of objects. Given that we have many usages of read only lists, > 1. Lets have a light-weight / shared-view {{toArray()}} implementation for > {{ReadOnlyList}} as well. > 2. Additionally, similar to {{java.util.List#subList(fromIndex, toIndex)}}, > lets have {{ReadOnlyList#subArray(fromIndex, toIndex)}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
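A copy-free {{toArray()}}/{{subArray()}} in the spirit of the proposal could look like the sketch below. {{ReadOnlyView}} is a hypothetical stand-in, not the actual {{org.apache.hadoop.hdfs.util.ReadOnlyList}} API; the point is that the returned array shares element references with the backing list, so no element objects are copied, though the array itself is freshly allocated on each call (which relates to Daryn's question about repeated copies).

```java
import java.util.List;

final class ReadOnlyView<E> {
  private final List<E> backing;

  ReadOnlyView(List<E> backing) {
    this.backing = backing;
  }

  public E get(int i) { return backing.get(i); }

  public int size() { return backing.size(); }

  // Fills a new Object[] with references to the backing list's elements;
  // the elements themselves are never copied or cloned.
  public Object[] toArray() {
    Object[] a = new Object[backing.size()];
    for (int i = 0; i < a.length; i++) {
      a[i] = backing.get(i);
    }
    return a;
  }

  // Analogue of List#subList(fromIndex, toIndex): an array over a slice,
  // again sharing element references with the backing list.
  public Object[] subArray(int fromIndex, int toIndex) {
    Object[] a = new Object[toIndex - fromIndex];
    for (int i = fromIndex; i < toIndex; i++) {
      a[i - fromIndex] = backing.get(i);
    }
    return a;
  }
}
```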
[jira] [Resolved] (HDFS-12686) Erasure coding system policy state is not correctly saved and loaded during real cluster restart
[ https://issues.apache.org/jira/browse/HDFS-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen resolved HDFS-12686. -- Resolution: Duplicate Since HDFS-12682 should be able to handle this, and it's pretty hard to split that work into two JIRAs due to protobuf changes, I'll resolve this as a dup. Thanks Sammi for filing the jira and Wei-Chiu for checking! > Erasure coding system policy state is not correctly saved and loaded during > real cluster restart > > > Key: HDFS-12686 > URL: https://issues.apache.org/jira/browse/HDFS-12686 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > > Inspired by HDFS-12682, I found the system erasure coding policy state will > not be correctly saved and loaded in a real cluster. Though there are unit > tests for this and they all pass with a MiniCluster, that is only because the > MiniCluster keeps the same static system erasure coding policy object across > the NN restart. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12686) Erasure coding system policy state is not correctly saved and loaded during real cluster restart
[ https://issues.apache.org/jira/browse/HDFS-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217588#comment-16217588 ] Wei-Chiu Chuang commented on HDFS-12686: The target version was missing for this EC blocker. [~Sammi] please let me know if you have a patch ready. Thanks! > Erasure coding system policy state is not correctly saved and loaded during > real cluster restart > > > Key: HDFS-12686 > URL: https://issues.apache.org/jira/browse/HDFS-12686 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > > Inspired by HDFS-12682, I found the system erasure coding policy state will > not be correctly saved and loaded in a real cluster. Though there are unit > tests for this and they all pass with a MiniCluster, that is only because the > MiniCluster keeps the same static system erasure coding policy object across > the NN restart. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12686) Erasure coding system policy state is not correctly saved and loaded during real cluster restart
[ https://issues.apache.org/jira/browse/HDFS-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-12686: --- Target Version/s: 3.0.0 > Erasure coding system policy state is not correctly saved and loaded during > real cluster restart > > > Key: HDFS-12686 > URL: https://issues.apache.org/jira/browse/HDFS-12686 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > > Inspired by HDFS-12682, I found the system erasure coding policy state will > not be correctly saved and loaded in a real cluster. Though there are unit > tests for this and they all pass with a MiniCluster, that is only because the > MiniCluster keeps the same static system erasure coding policy object across > the NN restart. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12706) Allow overriding HADOOP_SHELL_EXECNAME
Arpit Agarwal created HDFS-12706: Summary: Allow overriding HADOOP_SHELL_EXECNAME Key: HDFS-12706 URL: https://issues.apache.org/jira/browse/HDFS-12706 Project: Hadoop HDFS Issue Type: Improvement Reporter: Arpit Agarwal Some Hadoop shell scripts infer their own name using this bit of shell magic: {code} 18 MYNAME="${BASH_SOURCE-$0}" 19 HADOOP_SHELL_EXECNAME="${MYNAME##*/}" {code} e.g. see the [hdfs|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs#L18] script. The inferred shell script name is later passed to _hadoop-functions.sh_, which uses it to construct the names of some environment variables. E.g. when invoking _hdfs datanode_, the options variable name is inferred as follows: {code} # HDFS + DATANODE + OPTS -> HDFS_DATANODE_OPTS {code} This works well if the calling script name is the standard {{hdfs}} or {{yarn}}. If a distribution renames the script to something like foo.bar, then the variable name will be inferred as {{FOO.BAR_DATANODE_OPTS}}, which is not a valid bash variable name. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
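One way the override could work is sketched below: honor a pre-set {{HADOOP_SHELL_EXECNAME}} if the caller exported one, and fall back to a safe default when the file name cannot form a valid bash variable prefix. {{infer_execname}} is an illustrative helper written for this sketch, not actual _hadoop-functions.sh_ code.

```shell
# Hedged sketch (not the actual hadoop-functions.sh code): derive the exec
# name the way the scripts do, but allow an explicit override and guard
# against names that would produce invalid bash variable prefixes.
infer_execname() {
  # honor an explicit override if the caller exported one
  if [ -n "${HADOOP_SHELL_EXECNAME}" ]; then
    printf '%s' "${HADOOP_SHELL_EXECNAME}"
    return
  fi
  local name="${1##*/}"   # strip the directory, as ${MYNAME##*/} does
  case "${name}" in
    # e.g. "foo.bar" would yield FOO.BAR_DATANODE_OPTS, an invalid name
    *[!A-Za-z0-9_]*) name="hadoop" ;;
  esac
  printf '%s' "${name}"
}
```

A renamed distribution script could then export {{HADOOP_SHELL_EXECNAME=hdfs}} before sourcing the shared functions.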
[jira] [Commented] (HDFS-12702) Ozone: Add hugo to the dev docker image
[ https://issues.apache.org/jira/browse/HDFS-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217479#comment-16217479 ] Hadoop QA commented on HDFS-12702: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 2s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 28s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSClientExcludedNodes | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:ca8ddc6 | | JIRA Issue | HDFS-12702 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893754/HDFS-12702-HDFS-7240.001.patch | | Optional Tests | asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 45c86e4719b9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-12697-HDFS-7240.01.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: (was: HDFS-7240-HDFS-12697.01.patch) > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: (was: HDFS-7240-HDFS-12697.00.patch) > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-12697-HDFS-7240.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception
[ https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HDFS-12705: - Assignee: Hanisha Koneru > WebHdfsFileSystem exceptions should retain the caused by exception > -- > > Key: HDFS-12705 > URL: https://issues.apache.org/jira/browse/HDFS-12705 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Hanisha Koneru > > {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host > to the exception. While it preserves the original stacktrace, it omits the > original cause which complicates debugging. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217465#comment-16217465 ] Hadoop QA commented on HDFS-12697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HDFS-12697 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-12697 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893774/HDFS-7240-HDFS-12697.01.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21800/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch, > HDFS-7240-HDFS-12697.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12681: - Status: Patch Available (was: Open) > Fold HdfsLocatedFileStatus into HdfsFileStatus > -- > > Key: HDFS-12681 > URL: https://issues.apache.org/jira/browse/HDFS-12681 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chris Douglas >Priority: Minor > Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, > HDFS-12681.02.patch, HDFS-12681.03.patch > > > {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of > {{LocatedFileStatus}}. Conversion requires copying common fields and shedding > unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to > extend {{LocatedFileStatus}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12681: - Status: Open (was: Patch Available) > Fold HdfsLocatedFileStatus into HdfsFileStatus > -- > > Key: HDFS-12681 > URL: https://issues.apache.org/jira/browse/HDFS-12681 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chris Douglas >Priority: Minor > Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, > HDFS-12681.02.patch, HDFS-12681.03.patch > > > {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of > {{LocatedFileStatus}}. Conversion requires copying common fields and shedding > unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to > extend {{LocatedFileStatus}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Patch Available (was: Open) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Open (was: Patch Available) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217402#comment-16217402 ] Daryn Sharp commented on HDFS-12638: Might want to investigate the lifecycle handling of {{BlockUnderConstructionFeature#truncateBlock}}. > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang > Attachments: HDFS-12638-branch-2.8.2.001.patch, > OphanBlocksAfterTruncateDelete.jpg > > > The active NameNode exits due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > it is null. Looking through the history, I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check for > whether the BlockCollection is null. > NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
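For context on why the removed null check matters, here is a minimal hypothetical sketch (the class and field names are illustrative, not the actual HDFS source): the kind of defensive guard that HDFS-9754 removed, which drops a work item whose BlockCollection was deleted after the work was queued instead of dereferencing null inside the ReplicationMonitor thread.

```java
// Hypothetical sketch only -- names are illustrative, not the real HDFS classes.
public class ReplicationWorkSketch {

    /** Stand-in for the file metadata a queued replication work item points at. */
    static final class BlockCollection {
        final String storagePolicy;
        BlockCollection(String storagePolicy) { this.storagePolicy = storagePolicy; }
    }

    /**
     * Returns the storage policy to use when choosing targets, or null to
     * signal that the work item should be dropped. The null check is the
     * sort of guard whose absence turns a concurrently deleted file into an
     * NPE in ReplicationWork#chooseTargets.
     */
    static String chooseTargetsPolicy(BlockCollection bc) {
        if (bc == null) {
            // File was deleted between queuing and scheduling: skip, don't NPE.
            return null;
        }
        return bc.storagePolicy;
    }

    public static void main(String[] args) {
        System.out.println(chooseTargetsPolicy(new BlockCollection("HOT")));
        System.out.println(chooseTargetsPolicy(null)); // work item dropped
    }
}
```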
[jira] [Commented] (HDFS-12700) Fix datanode link that can not be accessed in dfshealth.html
[ https://issues.apache.org/jira/browse/HDFS-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217397#comment-16217397 ] Arpit Agarwal commented on HDFS-12700: -- Hi [~zhenyi], it's generally better to use hostnames as they are more robust, e.g. in a multi-homed setup. > Fix datanode link that can not be accessed in dfshealth.html > -- > > Key: HDFS-12700 > URL: https://issues.apache.org/jira/browse/HDFS-12700 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: fang zhenyi >Assignee: fang zhenyi >Priority: Minor > Fix For: 3.1.0 > > Attachments: HDFS-12700.000.patch > > > I found that the datanode link in dfshealth.html cannot be accessed if I > do not change the hosts file, so I changed the link to the IP address. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217386#comment-16217386 ] Manoj Govindassamy commented on HDFS-12544: --- [~yzhangal], 1. Yes, just like the files moved out of the scope directory are showing as "Deleted", the files moved in under a scope directory as part of renames will show as "Added". 2. The newly created directories/files are available in the current version. So, even these newly created dirs can be requested for the scope diff. It's just that they are not part of any older snapshots, so we will get an empty diff list. Will post a new patch revision with tests updated to cover the above cases. Thanks. > SnapshotDiff - support diff generation on any snapshot root descendant > directory > > > Key: HDFS-12544 > URL: https://issues.apache.org/jira/browse/HDFS-12544 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, > HDFS-12544.03.patch, HDFS-12544.04.patch > > > {noformat} > # hdfs snapshotDiff > > {noformat} > Using the snapshot diff command, we can generate a diff report between any two > given snapshots under a snapshot root directory. The command today only > accepts a path that is a snapshot root. There are many deployments where > the snapshot root is configured at a higher-level directory but the diff > report needed is only for a specific directory under the snapshot root. In > these cases, the diff report can be filtered for changes pertaining to the > directory we are interested in. But when the snapshot root directory is very > large, snapshot diff report generation can take minutes even if we are > interested to know the changes only in a small directory. 
So, it would be > much more performant if the diff report calculation could be limited to only the > interesting sub-directory of the snapshot root instead of the whole snapshot > root. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception
Daryn Sharp created HDFS-12705: -- Summary: WebHdfsFileSystem exceptions should retain the caused by exception Key: HDFS-12705 URL: https://issues.apache.org/jira/browse/HDFS-12705 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 2.8.0 Reporter: Daryn Sharp {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host to the exception. While it preserves the original stacktrace, it omits the original cause which complicates debugging. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
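A hedged sketch of one way to fix the problem described above (the method and class names here are illustrative, not the actual {{WebHdfsFileSystem#runWithRetry}} code): when reflectively re-creating an exception to prepend the remote host, carry over both the original stack trace and the original cause so neither is lost to the caller.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

// Hypothetical sketch only -- not the real WebHdfsFileSystem implementation.
public class RetainCause {

    /**
     * Rebuilds the exception with the remote host prepended to its message,
     * preserving both the stack trace and the cause chain for debugging.
     */
    static IOException prependHost(IOException original, String host) {
        try {
            Constructor<? extends IOException> ctor =
                original.getClass().getConstructor(String.class);
            IOException rewrapped = ctor.newInstance(host + ": " + original.getMessage());
            rewrapped.setStackTrace(original.getStackTrace()); // keep where it happened
            if (original.getCause() != null) {
                rewrapped.initCause(original.getCause());      // keep why it happened
            }
            return rewrapped;
        } catch (ReflectiveOperationException e) {
            return original; // can't rewrap; fall back to the original exception
        }
    }

    public static void main(String[] args) {
        IOException cause = new IOException("connection reset");
        IOException top = new IOException("read failed");
        top.initCause(cause);
        IOException out = prependHost(top, "dn1.example.com");
        System.out.println(out.getMessage());            // dn1.example.com: read failed
        System.out.println(out.getCause().getMessage()); // connection reset
    }
}
```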
[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217361#comment-16217361 ] Xiao Chen commented on HDFS-12482: -- Thanks Eddy for the new rev. Doc example looks great! Agree {{<=0}} is handled well by the {{Math.max}} code. My concern is purely about supportability. If 0 can disable it, I'd prefer negative values to be rejected at DN startup time rather than discovered later when the transmit counts turn out to be very low. +1 pending that and pre-commit. > Provide a configuration to adjust the weight of EC recovery tasks to adjust > the speed of recovery > - > > Key: HDFS-12482 > URL: https://issues.apache.org/jira/browse/HDFS-12482 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12482.00.patch, HDFS-12482.01.patch, > HDFS-12482.02.patch > > > The relative speed of EC recovery compared to 3x replica recovery is a > function of the EC codec, number of sources, NIC speed, CPU speed, etc. > Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of > sources, # of targets)}} compared to {{1}} for 3x replica recovery, and the NN > uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the > DataNode; thus we can add a coefficient for the user to tune the weight of EC > recovery tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
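The startup-time validation being asked for, plus the {{Math.max}} clamping already agreed on, can be sketched as follows. This is a hypothetical standalone sketch, not the patch itself; the configuration key string and method names are illustrative.

```java
// Hypothetical sketch of the xmit-weight handling discussed above.
public class XmitWeightCheck {
    static final String KEY = "dfs.datanode.ec.reconstruction.xmits.weight";

    /** Reject negative weights at DataNode startup with a clear message. */
    static double checkWeight(double xmitWeight) {
        if (xmitWeight < 0) {
            throw new IllegalArgumentException(
                "Invalid value configured for " + KEY
                + ", it can not be a negative value (" + xmitWeight + ").");
        }
        return xmitWeight;
    }

    /**
     * EC reconstruction xmits: weighted by the coefficient, but floored at 1
     * so a weight of 0 effectively disables the extra EC weighting.
     */
    static int ecXmits(int sources, int targets, double weight) {
        return (int) Math.max(1, weight * Math.max(sources, targets));
    }

    public static void main(String[] args) {
        System.out.println(ecXmits(6, 3, 0.5)); // 0.5 * max(6, 3) -> 3
        System.out.println(ecXmits(6, 3, 0.0)); // weighting disabled -> floor of 1
    }
}
```

Throwing from a startup-time check surfaces the misconfiguration immediately in the DN log, instead of leaving operators to notice that scheduled reconstruction work is mysteriously low.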
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-7240-HDFS-12697.01.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch, > HDFS-7240-HDFS-12697.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: (was: HDFS-7240-HDFS-12697.01.patch) > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: (was: HDFS-7240-HDFS-12697.01.patch) > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch, > HDFS-7240-HDFS-12697.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-7240-HDFS-12697.01.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch, > HDFS-7240-HDFS-12697.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind
[ https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217316#comment-16217316 ] James Clampffer commented on HDFS-11807: Most recent change fixes the hang, thanks. Some feedback: -stdlib is now included twice {code} #include #include +#include {code} -The EXPECT_EQ checks on return values are good, however in some cases if the EXPECT fails there's no chance the rest of the test will run correctly. For example if one of the write() calls fails, it'd be better to just exit the test than let it produce what might be a confusing error. An easy fix would be to swap EXPECT_EQ with ASSERT_EQ for these cases. -I'd avoid using the hardcoded path "tmp" when you write the file you're going to send with curl. Instead check out what TempFile in configuration_test.h does to get a file name that's guaranteed to be unused. -libhdfs also uses this test so we don't want to hard code the libhdfspp_ prefix on API functions since that means this could only ever test libhdfs++. You can most likely apply the same trick that the libhdfspp_wrapper shims use to add the appropriate prefixes during the build. Alternatively I think you could build the binary to be used with valgrind with -DLIBHDFS_HDFS_H set so the shims aren't applied at all to avoid changing the calls. -It might be worth checking the return value of snprintf to see if the string gets truncated. The paths returned by the temp file mechanism might be long enough to spill over 200 chars. > libhdfs++: Get minidfscluster tests running under valgrind > -- > > Key: HDFS-11807 > URL: https://issues.apache.org/jira/browse/HDFS-11807 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: Anatoli Shein > Attachments: HDFS-11807.HDFS-8707.000.patch, > HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch > > > The gmock based unit tests generally don't expose race conditions and memory > stomps. 
A good way to expose these is running libhdfs++ stress tests and > tools under valgrind and pointing them at a real cluster. Right now the CI > tools don't do that so bugs occasionally slip in and aren't caught until they > cause trouble in applications that use libhdfs++ for HDFS access. > The reason the minidfscluster tests don't run under valgrind is because the > GC and JIT compiler in the embedded JVM do things that look like errors to > valgrind. I'd like to have these tests do some basic setup and then fork > into two processes: one for the minidfscluster stuff and one for the > libhdfs++ client test. A small amount of shared memory can be used to > provide a place for the minidfscluster to stick the hdfsBuilder object that > the client needs to get info about which port to connect to. Can also stick > a condition variable there to let the minidfscluster know when it can shut > down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-7240-HDFS-12697.01.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch, > HDFS-7240-HDFS-12697.01.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Status: Patch Available (was: In Progress) > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12697) Ozone services must stay disabled in secure setup for alpha
[ https://issues.apache.org/jira/browse/HDFS-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12697: -- Attachment: HDFS-7240-HDFS-12697.00.patch > Ozone services must stay disabled in secure setup for alpha > --- > > Key: HDFS-12697 > URL: https://issues.apache.org/jira/browse/HDFS-12697 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jitendra Nath Pandey >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HDFS-7240-HDFS-12697.00.patch > > > When security is enabled, ozone services should not start up, even if ozone > configurations are enabled. This is important to ensure a user experimenting > with ozone doesn't inadvertently get exposed to attacks. Specifically, > 1) KSM should not start up. > 2) SCM should not startup. > 3) Datanode's ozone xceiverserver should not startup, and must not listen on > a port. > 4) Datanode's ozone handler port should not be open, and webservice must stay > disabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12699) TestMountTable fails with Java 7
[ https://issues.apache.org/jira/browse/HDFS-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217252#comment-16217252 ] Íñigo Goiri commented on HDFS-12699: The failed unit test is not related. branch-2 didn't build, but as [~aw] mentioned, it seems like the HDFS unit tests for branch-2 are taking too much memory and Jenkins is not able to run them. I'd say we can commit this fix to both trunk and branch-2. > TestMountTable fails with Java 7 > > > Key: HDFS-12699 > URL: https://issues.apache.org/jira/browse/HDFS-12699 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: HDFS-12699-branch-2.000.patch, HDFS-12699.000.patch > > > Some of the issues for HDFS-12620 were related to Java 7. > In particular, we relied on the {{HashMap}} order (which is wrong). > This worked by chance with Java 8 (trunk) but not with Java 7 (branch-2). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
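The {{HashMap}}-order pitfall behind this failure can be illustrated with a small standalone example (the mount-table-style keys below are made up for illustration): {{HashMap}} iteration order is an implementation detail that differs between Java 7 and Java 8, so a test that asserts on map contents in iteration order can pass on one JDK and fail on another. Imposing an explicit order (or using {{TreeMap}}/{{LinkedHashMap}}) makes the assertion deterministic.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OrderSafe {

    /** Order-independent way to compare a map's keys in a test. */
    static List<String> sortedKeys(Map<String, String> m) {
        List<String> keys = new ArrayList<>(m.keySet());
        Collections.sort(keys); // deterministic on every JDK
        return keys;
    }

    public static void main(String[] args) {
        // HashMap gives no ordering guarantee across JDK versions...
        Map<String, String> table = new HashMap<>();
        table.put("/user", "ns0");
        table.put("/tmp", "ns1");
        table.put("/data", "ns0");
        // ...so sort before asserting instead of relying on iteration order.
        System.out.println(sortedKeys(table)); // [/data, /tmp, /user] everywhere
    }
}
```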
[jira] [Commented] (HDFS-12704) FBR may corrupt block state
[ https://issues.apache.org/jira/browse/HDFS-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217230#comment-16217230 ] Daryn Sharp commented on HDFS-12704: During a decomm of a faulty node, the NNs frequently reported invalid protobufs from the node during decode of {{reportBlock}}, interleaved with {{ArrayIndexOutOfBoundsException}}s during actual processing of the report. The JVM clipped the stacktrace of the exception so it is unknown where it occurs. The {{DecommissionManager}} stopped after the first AIOB, which is probably the root cause of HDFS-12703. The block state appears to be corrupted into an unknown state. Since the decomm task aborts and the exception is lost, it's impossible to know where the bug is occurring. > FBR may corrupt block state > --- > > Key: HDFS-12704 > URL: https://issues.apache.org/jira/browse/HDFS-12704 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Priority: Critical > > If FBR processing generates a runtime exception it is believed to foul the > block state and lead to unpredictable behavior. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12702) Ozone: Add hugo to the dev docker image
[ https://issues.apache.org/jira/browse/HDFS-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217222#comment-16217222 ] Hadoop QA commented on HDFS-12702: -- (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HDFS-Build/21799/console in case of problems. > Ozone: Add hugo to the dev docker image > --- > > Key: HDFS-12702 > URL: https://issues.apache.org/jira/browse/HDFS-12702 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12702-HDFS-7240.001.patch > > > Both HADOOP-14163 and HDFS-12664 requries hugo site generation tool. To make > it easier to review those patches I suggest to add Hugo to the dev docker > image now. > This patch adds hugo to the dev docker image: > Test method: > {code} > cd dev-support/docker > docker build -t test . > docker run test hugo version > docker rmi test > {code} > Expected output (after docker run): > {code} > Hugo Static Site Generator v0.30.2 linux/amd64 BuildDate: 2017-10-19T11:34:27Z > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12704) FBR may corrupt block state
Daryn Sharp created HDFS-12704: -- Summary: FBR may corrupt block state Key: HDFS-12704 URL: https://issues.apache.org/jira/browse/HDFS-12704 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.8.0 Reporter: Daryn Sharp Priority: Critical If FBR processing generates a runtime exception it is believed to foul the block state and lead to unpredictable behavior. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org