[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes
[ https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255192#comment-16255192 ] Virajith Jalaparti commented on HDFS-12775: --- All the failed tests pass locally. Committing v4 of the patch to the feature branch. Thanks for reviewing [~elgoiri]. > [READ] Fix reporting of Provided volumes > > > Key: HDFS-12775 > URL: https://issues.apache.org/jira/browse/HDFS-12775 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12775-HDFS-9806.001.patch, > HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, > HDFS-12775-HDFS-9806.004.patch, provided_capacity_nn.png, > provided_storagetype_capacity.png, provided_storagetype_capacity_jmx.png > > > Provided Volumes currently report infinite capacity and 0 space used. > Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. > This JIRA is for making this more readable, and replacing these with what users > would expect. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255399#comment-16255399 ] Hadoop QA commented on HDFS-12825: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 55s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 94 unchanged - 1 fixed = 94 total (was 95) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}184m 52s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:2 | | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12825 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12897968/HDFS-12825.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 864871aa109e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Open (was: Patch Available) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience.
[jira] [Comment Edited] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255468#comment-16255468 ] Shashikant Banerjee edited comment on HDFS-12594 at 11/16/17 3:20 PM: -- Thanks [~szetszwo] for the review comments; patch v8 addresses them. >>DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar >>but there is a small difference when len == 0: DFSUtilClient returns new byte[0][] and DFSUtil returns new byte[][]{null}. Is it a bug? Forward mapping: empty byte[] -> byte[][]{null}. Reverse mapping: byte[][]{null} -> byte[]{(byte) '/'} -> String("/"). I have addressed the problems in the conversion of byte[][] to byte[]. Please have a look. > SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC > response limit > --- > > Key: HDFS-12594 > URL: https://issues.apache.org/jira/browse/HDFS-12594 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, > HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, > HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, > SnapshotDiff_Improvemnets .pdf > > > The snapshotDiff command fails if the snapshotDiff report size is larger than > the configuration value of ipc.maximum.response.length which is by default > 128 MB.
> Worst case, with all rename ops in snapshots each with source and target > name equal to MAX_PATH_LEN which is 8k characters, this would result in about > 8192 renames. > > SnapshotDiff is currently used by distcp to optimize copy operations and in > case of the diff report exceeding the limit, it fails with the below > exception: > Test set: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > --- > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport > testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport) > Time elapsed: 111.906 sec <<< ERROR! > java.io.IOException: Failed on local exception: > org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; > Host Details : local host is: "hw15685.local/10.200.5.230"; destination host > is: "localhost":59808; > Attached is the proposal for the changes required.
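The comment above debates two encodings of an empty path when splitting into byte[][] components. A self-contained sketch of the two variants and why both round-trip back to "/" (this is illustrative only, not the DFSUtilClient/DFSUtil source; the method names splitA/splitB/join are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class PathBytes {

    // Variant A (DFSUtilClient-style): an empty input yields zero components.
    static byte[][] splitA(byte[] path) {
        return path.length == 0 ? new byte[0][] : split(path);
    }

    // Variant B (DFSUtil-style): an empty input yields one null component.
    static byte[][] splitB(byte[] path) {
        return path.length == 0 ? new byte[][]{null} : split(path);
    }

    // Split "/a/b" into {"a","b"}: shared helper for the non-empty case.
    private static byte[][] split(byte[] path) {
        List<byte[]> parts = new ArrayList<>();
        int start = 0;
        for (int i = 0; i <= path.length; i++) {
            if (i == path.length || path[i] == '/') {
                if (i > start) {
                    byte[] part = new byte[i - start];
                    System.arraycopy(path, start, part, 0, i - start);
                    parts.add(part);
                }
                start = i + 1;
            }
        }
        return parts.toArray(new byte[0][]);
    }

    // Joining treats BOTH empty encodings as the root path "/".
    static String join(byte[][] parts) {
        if (parts.length == 0 || (parts.length == 1 && parts[0] == null)) {
            return "/";
        }
        StringBuilder sb = new StringBuilder();
        for (byte[] p : parts) {
            sb.append('/').append(new String(p, StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] empty = new byte[0];
        if (splitA(empty).length != 0) throw new AssertionError();
        if (splitB(empty).length != 1 || splitB(empty)[0] != null) throw new AssertionError();
        if (!join(splitA(empty)).equals("/")) throw new AssertionError();
        if (!join(splitB(empty)).equals("/")) throw new AssertionError();
        if (!join(split("/a/b".getBytes(StandardCharsets.UTF_8))).equals("/a/b")) throw new AssertionError();
        System.out.println("ok");   // prints ok
    }
}
```

Because the two encodings are observationally equivalent only if every consumer handles both, normalizing to one of them (as the patch apparently does) removes a class of subtle bugs.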
[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
[ https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255283#comment-16255283 ] Weiwei Yang commented on HDFS-12748: [~daryn] any comments? > NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY > > > Key: HDFS-12748 > URL: https://issues.apache.org/jira/browse/HDFS-12748 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Assignee: Weiwei Yang > Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, > HDFS-12748.003.patch > > > In our production environment, the standby NN often does full GC; through MAT we > found the largest object is FileSystem$Cache, which contains 7,844,890 > DistributedFileSystem instances. > By viewing the call hierarchy of FileSystem.get(), I found that only > NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why it creates a > different DistributedFileSystem every time instead of getting a FileSystem from > the cache. > {code:java} > case GETHOMEDIRECTORY: { > final String js = JsonUtil.toJsonString("Path", > FileSystem.get(conf != null ? conf : new Configuration()) > .getHomeDirectory().toUri().getPath()); > return Response.ok(js).type(MediaType.APPLICATION_JSON).build(); > } > {code} > When we close the FileSystem after GETHOMEDIRECTORY, the NN doesn't do full GC. > {code:java} > case GETHOMEDIRECTORY: { > FileSystem fs = null; > try { > fs = FileSystem.get(conf != null ? conf : new Configuration()); > final String js = JsonUtil.toJsonString("Path", > fs.getHomeDirectory().toUri().getPath()); > return Response.ok(js).type(MediaType.APPLICATION_JSON).build(); > } finally { > if (fs != null) { > fs.close(); > } > } > } > {code}
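The growth pattern described in the report can be illustrated with a plain-Java sketch. This is not Hadoop's actual FileSystem.Cache code; it only mimics a cache whose key includes the calling principal (as FileSystem's cache does), so that when each web request arrives under an effectively distinct principal, every lookup misses and inserts an entry that is never evicted:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class CacheLeakSketch {

    // A cache key loosely modeled on FileSystem.Cache.Key: (scheme,
    // authority, user). The user component is what drives the leak.
    static final class Key {
        final String scheme, authority, user;
        Key(String scheme, String authority, String user) {
            this.scheme = scheme; this.authority = authority; this.user = user;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return scheme.equals(k.scheme) && authority.equals(k.authority)
                && user.equals(k.user);
        }
        @Override public int hashCode() {
            return Objects.hash(scheme, authority, user);
        }
    }

    static final Map<Key, Object> CACHE = new HashMap<>();

    // Each call with a distinct user misses the cache and inserts a new
    // entry; nothing ever removes entries, so the map only grows.
    static Object get(String user) {
        return CACHE.computeIfAbsent(
            new Key("hdfs", "nn:8020", user), k -> new Object());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            get("user-" + i);   // a new request principal each time
        }
        System.out.println(CACHE.size());   // prints 1000
    }
}
```

Both fixes discussed on the issue break this cycle: closing the FileSystem removes its cache entry, while FileSystem.newInstance-style creation would bypass the cache entirely.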
[jira] [Commented] (HDFS-12647) DN commands processing should be async
[ https://issues.apache.org/jira/browse/HDFS-12647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255424#comment-16255424 ] Hadoop QA commented on HDFS-12647: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 485 unchanged - 5 fixed = 486 total (was 490) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}163m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Naked notify in org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessor.run() At BPServiceActor.java:At BPServiceActor.java:[line 1325] | | | Unconditional wait in org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessor.processPendingCommands() At BPServiceActor.java:At BPServiceActor.java:[line 1376] | | Failed junit tests | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS | | | hadoop.hdfs.server.datanode.TestDatanodeRegister | | | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12647 | | JIRA Patch URL |
[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255423#comment-16255423 ] Gabor Bota commented on HDFS-12825: --- Test failures seem unrelated to me. > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: HDFS-12825.001.patch, error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK* Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}}
[jira] [Commented] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255456#comment-16255456 ] Hadoop QA commented on HDFS-12826: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 27m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12826 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12897990/HDFS-12826.patch | | Optional Tests | asflicense mvnsite | | uname | Linux f03d2b745aeb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 462e25a | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 297 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22116/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation >Affects Versions: 3.0.0-beta1 >Reporter: Harshakiran Reddy >Assignee: usharani >Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}} , refreshNamenodes > command required IPC port but in Documentation it's saying the RPC port. 
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat}
[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255195#comment-16255195 ] Gabor Bota commented on HDFS-12825: --- Sorry, I assigned myself and made the patch before reading this. > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: HDFS-12825.001.patch, error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK* Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}}
[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes
[ https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12775: -- Resolution: Fixed Status: Resolved (was: Patch Available) > [READ] Fix reporting of Provided volumes > > > Key: HDFS-12775 > URL: https://issues.apache.org/jira/browse/HDFS-12775 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12775-HDFS-9806.001.patch, > HDFS-12775-HDFS-9806.002.patch, HDFS-12775-HDFS-9806.003.patch, > HDFS-12775-HDFS-9806.004.patch, provided_capacity_nn.png, > provided_storagetype_capacity.png, provided_storagetype_capacity_jmx.png > > > Provided Volumes currently report infinite capacity and 0 space used. > Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. > This JIRA is for making this more readable, and replacing these with what users > would expect.
[jira] [Commented] (HDFS-12647) DN commands processing should be async
[ https://issues.apache.org/jira/browse/HDFS-12647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255185#comment-16255185 ] Nanda kumar commented on HDFS-12647: Patch v001 adds logic to make sure that all the outstanding DatanodeCommands are executed before sending FBR, and some minor refactoring. > DN commands processing should be async > -- > > Key: HDFS-12647 > URL: https://issues.apache.org/jira/browse/HDFS-12647 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Nanda kumar > Attachments: HDFS-12647.000.patch, HDFS-12647.001.patch > > > Due to dataset lock contention, service actors may encounter significant > latency while processing DN commands. Even the queuing of async deletions > requires multiple lock acquisitions. A slow disk will cause a backlog of > xceivers instantiating block sender/receivers which starves the actor and > leads to the NN falsely declaring the node dead. > Async processing of all commands will free the actor to perform its primary > purpose of heartbeating and block reporting. Note that FBRs will be > dependent on queued block invalidations not being included in the report.
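Async command processing of this kind typically hands commands from the heartbeat thread to a worker through a guarded queue. A minimal sketch of that hand-off (hypothetical, not the actual BPServiceActor code), showing the wait/notify discipline FindBugs checks for: notify only together with a visible state change, and wait only inside a condition loop:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class CommandQueueSketch {
    private final Queue<Runnable> commands = new ArrayDeque<>();
    private final Object lock = new Object();

    // Notify under the same lock as the state change (the add) -- omitting
    // the state change is what FindBugs calls a "naked notify"
    // (NN_NAKED_NOTIFY).
    void enqueue(Runnable cmd) {
        synchronized (lock) {
            commands.add(cmd);
            lock.notifyAll();
        }
    }

    // Wait inside a condition loop rather than unconditionally -- waiting
    // without re-checking the condition is the "unconditional wait"
    // (UW_UNCOND_WAIT) pattern, and the loop also guards against spurious
    // wakeups.
    Runnable take() throws InterruptedException {
        synchronized (lock) {
            while (commands.isEmpty()) {
                lock.wait();
            }
            return commands.poll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CommandQueueSketch q = new CommandQueueSketch();
        int[] ran = {0};
        Thread worker = new Thread(() -> {
            try {
                q.take().run();   // blocks until a command is queued
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        q.enqueue(() -> ran[0]++);
        worker.join();
        System.out.println(ran[0]);   // prints 1
    }
}
```

With this discipline, the heartbeat thread only ever pays the cost of an enqueue; the dataset-lock-heavy command execution happens on the worker, which is the point of the JIRA.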
[jira] [Updated] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
[ https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12748: --- Target Version/s: 3.0.0 > NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY > > > Key: HDFS-12748 > URL: https://issues.apache.org/jira/browse/HDFS-12748 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Assignee: Weiwei Yang > Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, > HDFS-12748.003.patch > > > In our production environment, the standby NN often does full GC; through MAT we > found the largest object is FileSystem$Cache, which contains 7,844,890 > DistributedFileSystem instances. > By viewing the call hierarchy of FileSystem.get(), I found that only > NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why a different > DistributedFileSystem is created every time instead of getting a FileSystem from > the cache. > {code:java} > case GETHOMEDIRECTORY: { > final String js = JsonUtil.toJsonString("Path", > FileSystem.get(conf != null ? conf : new Configuration()) > .getHomeDirectory().toUri().getPath()); > return Response.ok(js).type(MediaType.APPLICATION_JSON).build(); > } > {code} > When we close the FileSystem after GETHOMEDIRECTORY, the NN doesn't do full GC. > {code:java} > case GETHOMEDIRECTORY: { > FileSystem fs = null; > try { > fs = FileSystem.get(conf != null ? conf : new Configuration()); > final String js = JsonUtil.toJsonString("Path", > fs.getHomeDirectory().toUri().getPath()); > return Response.ok(js).type(MediaType.APPLICATION_JSON).build(); > } finally { > if (fs != null) { > fs.close(); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12770) Add doc about how to disable client socket cache
[ https://issues.apache.org/jira/browse/HDFS-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12770: --- Target Version/s: 3.0.0 (was: 3.1.0) > Add doc about how to disable client socket cache > > > Key: HDFS-12770 > URL: https://issues.apache.org/jira/browse/HDFS-12770 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Trivial > Labels: cache, documentation > Attachments: HDFS-12770.001.patch > > > After HDFS-3365, client socket cache (PeerCache) can be disabled, but there > is no doc about this. We should add some doc in hdfs-default.xml to instruct > user how to disable it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
Harshakiran Reddy created HDFS-12826: Summary: Document Saying the RPC port, But it's required IPC port in Balancer Document. Key: HDFS-12826 URL: https://issues.apache.org/jira/browse/HDFS-12826 Project: Hadoop HDFS Issue Type: Bug Components: balancer & mover, documentation Affects Versions: 3.0.0-beta1 Reporter: Harshakiran Reddy Priority: Minor In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes command requires the IPC port, but the documentation says the RPC port. http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer {noformat} bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin -refreshNamenodes host-name:65110 refreshNamenodes: Unknown protocol: org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin -refreshNamenodes Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin -refreshNamenodes host-name:50077 bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12826: Status: Patch Available (was: Open) > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation >Affects Versions: 3.0.0-beta1 >Reporter: Harshakiran Reddy >Assignee: usharani >Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}} , refreshNamenodes > command required IPC port but in Documentation it's saying the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12826: Attachment: HDFS-12826.patch [~Harsha1206] thanks for reporting. It makes sense to fix this. Uploaded the patch; kindly review. > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation >Affects Versions: 3.0.0-beta1 >Reporter: Harshakiran Reddy >Assignee: usharani >Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}} , refreshNamenodes > command required IPC port but in Documentation it's saying the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: HDFS-12778-HDFS-9806.002.patch Thanks for taking a look [~elgoiri]. Posting a new patch with the additional test cases ({{testNumberOfProvidedLocations}} and {{testNumberOfProvidedLocationsManyBlocks}}). > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Patch Available (was: Open) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani reassigned HDFS-12826: --- Assignee: usharani > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation >Affects Versions: 3.0.0-beta1 >Reporter: Harshakiran Reddy >Assignee: usharani >Priority: Minor > > In {{Adding a new Namenode to an existing HDFS cluster}} , refreshNamenodes > command required IPC port but in Documentation it's saying the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12810) Under "DFS Storage Types", the Namenode Web UI doesn't display the capacityRemaining correctly when it is 0.
[ https://issues.apache.org/jira/browse/HDFS-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255214#comment-16255214 ] Virajith Jalaparti commented on HDFS-12810: --- Thanks for posting a fix [~hanishakoneru]. While this fix solves this particular case, is there a reason why the value of 0 "gets lost" from {{dfshealth.js}} to {{dfshealth.html}} ({{b.capacityRemaining}} has a value of 0 in {{dfshealth.js}} but {{value.capacityRemaining}} doesn't retain this value in {{dfshealth.html}}) ? I suspect this might be something to do with the call to {{render()}} in {{dfshealth.js}}. > Under "DFS Storage Types", the Namenode Web UI doesn't display the > capacityRemaining correctly when it is 0. > > > Key: HDFS-12810 > URL: https://issues.apache.org/jira/browse/HDFS-12810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Virajith Jalaparti >Assignee: Hanisha Koneru > Attachments: HDFS-12810.001.patch > > > When the {{capacityRemaining}} for a StorageType is 0, the Namenode's Web UI > displays an empty string ("()") instead of "0 (0%)". -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
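For illustration: the symptom described above (a real 0 becoming an empty "()" display) is consistent with a truthiness check somewhere in the rendering path treating a legitimate zero as "missing". A minimal sketch of the pitfall and the fix, in Python, which shares JavaScript's falsy zero; the helper names are hypothetical, not the actual dfshealth.js code:

```python
# Sketch (hypothetical helpers): why a capacityRemaining of 0 can render as
# an empty string. In both JavaScript and Python, 0 is falsy, so a
# truthiness guard silently drops a legitimate zero value.

def fmt_capacity_buggy(remaining, total):
    # Falsy check: 0 is treated the same as "missing", so we render "".
    if not remaining:
        return ""
    return "%d (%.0f%%)" % (remaining, 100.0 * remaining / total)

def fmt_capacity_fixed(remaining, total):
    # Explicitly distinguish "absent" (None) from a real 0.
    if remaining is None:
        return ""
    pct = 0.0 if total == 0 else 100.0 * remaining / total
    return "%d (%.0f%%)" % (remaining, pct)

print(fmt_capacity_buggy(0, 1024))   # -> "" (the empty "()"-style display)
print(fmt_capacity_fixed(0, 1024))   # -> "0 (0%)"
```

If the loss happens at a template or render boundary rather than in a formatter, the same falsy-zero reasoning applies: the guard needs an explicit undefined/null check instead of a plain truthiness test.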
[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode
[ https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255253#comment-16255253 ] Hadoop QA commented on HDFS-10285: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 26 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 12s{color} | {color:orange} hadoop-hdfs-project: The patch generated 19 new + 2025 unchanged - 2 fixed = 2044 total (was 2027) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 24s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 13s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:1 | | Failed junit tests | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.datanode.TestBPOfferService | | |
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256584#comment-16256584 ] Allen Wittenauer commented on HDFS-12711: - Ignoring the hs_err_pid log files is pretty much just sticking our collective heads in the sand about actual, real problems with the unit tests. The unit tests themselves haven't been rock solid for a very long time, even before all of this started happening. Entries have been put into the ignore pile so often that I wouldn't be surprised if the community is already at the point that most developers are ignoring precommit. (e.g., commits with findbugs reported in the issues, javadoc compilation failures being treated as "environmental", etc, etc.) If I were actually paying more attention to day-to-day Hadoop bits these days, I'd probably be ready to disable unit tests (at least HDFS) to specifically avoid the "cried wolf" condition. The rest of the precommit tests work properly the vast majority of the time and are probably more important given the current state of things. (Never mind the massive speed-up. QBT is hitting the 15-hour mark for a full run for branch-2 when it is actually allowed to complete.) No one seems to actually care that the unit tests are a broken mess and I doubt they'd be missed. My goal here was to prevent Hadoop from bringing down the rest of the ASF build infrastructure. It's under enough stress without this project making things that much worse. Achievement unlocked, and other Yetus users will pick up those new safety features in the next release. I should probably close this JIRA issue. Unless someone else plans to spend some effort on these bugs? At least at this point in time, I view my work here as complete. Also: {code} /build/ {code} ARGH. That hasn't been valid since Hadoop used ant. A great example of "well, if we ignore it, it doesn't exist, right?" 
Because anything that is still using /build/ almost certainly isn't safe for parallel tests and likely contributing to a whole host of problems. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255873#comment-16255873 ] Anu Engineer commented on HDFS-12500: - [~linyiqun] Thanks for fixing this. Test failures are not related to this patch. I will commit this shortly. [~cheersyang] Thanks for filing this. > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per discussion in HDFS-12489 to reduce the verbosity of logs when exception > happens, lets add logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, when logger is added, lets > add a debug log to print the total time a command execution spent. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255749#comment-16255749 ] Hadoop QA commented on HDFS-12594: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 951 unchanged - 0 fixed = 955 total (was 951) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}190m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.getStartPath() may expose internal representation by returning SnapshotDiffReportListing.startPath At SnapshotDiffReportListing.java:by returning SnapshotDiffReportListing.startPath At SnapshotDiffReportListing.java:[line 162] | | |
[jira] [Created] (HDFS-12827) Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation
Suri babu Nuthalapati created HDFS-12827: Summary: Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation Key: HDFS-12827 URL: https://issues.apache.org/jira/browse/HDFS-12827 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Suri babu Nuthalapati Priority: Minor The placement should be this: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html HDFS’s placement policy is to put one replica on one node in the local rack, another on a node in a different (remote) rack, and the last on a different node in the same remote rack. The Hadoop Definitive Guide says the same, and I have tested and seen the same behavior as above. But in the documentation for versions after r2.5.2 it is described as: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time
[ https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-12528: -- Summary: Short-circuit reads unnecessarily disabled for a long time (was: Short-circuit reads getting disabled frequently in certain scenarios) > Short-circuit reads unnecessarily disabled for a long time > -- > > Key: HDFS-12528 > URL: https://issues.apache.org/jira/browse/HDFS-12528 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client, performance >Affects Versions: 2.6.0 >Reporter: Andre Araujo >Assignee: John Zhuge > Attachments: HDFS-12528.000.patch > > > We have scenarios where data ingestion makes use of the -appendToFile > operation to add new data to existing HDFS files. In these situations, we're > frequently running into the problem described below. > We're using Impala to query the HDFS data with short-circuit reads (SCR) > enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce > the memory footprint. In some cases, though, Impala still keeps the HDFS file > handle open for reuse. > The "unbuffer" call, however, causes the file's current block reader to be > closed, which makes the associated ShortCircuitReplica evictable from the > ShortCircuitCache. When the cluster is under load, this means that the > ShortCircuitReplica can be purged off the cache pretty fast, which closes the > file descriptor to the underlying storage file. > That means that when Impala re-reads the file it has to re-open the storage > files associated with the ShortCircuitReplica's that were evicted from the > cache. If there were no appends to those blocks, the re-open will succeed > without problems. 
If one block was appended since the ShortCircuitReplica was > created, the re-open will fail with the following error: > {code} > Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 > not found > {code} > This error is handled as an "unknown response" by the BlockReaderFactory [1], > which disables short-circuit reads for 10 minutes [2] for the client. > These 10 minutes without SCR can have a big performance impact for the client > operations. In this particular case ("Meta file not found") it would suffice > to return null without disabling SCR. This particular block read would fall > back to the normal, non-short-circuited, path and other SCR requests would > continue to work as expected. > It might also be interesting to be able to control how long SCR is disabled > for in the "unknown response" case. 10 minutes seems a bit too long and not > being able to change that is a problem. > [1] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646 > [2] > https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
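To illustrate the mechanism being criticized: the 10-minute disable is effectively a client-side cooldown switch that any "unknown response" trips for a fixed window. A minimal sketch with hypothetical names (this is not the actual DomainSocketFactory API, just the shape of the behavior):

```python
import time

class FastPathCooldown:
    """Sketch (hypothetical, not Hadoop's DomainSocketFactory) of a
    client-side switch that disables a fast path, e.g. short-circuit
    reads, for a fixed window after an unexpected error."""

    def __init__(self, disable_seconds=600, clock=time.monotonic):
        self.disable_seconds = disable_seconds  # HDFS hard-codes ~10 minutes
        self.clock = clock                      # injectable for testing
        self.disabled_until = 0.0

    def record_failure(self):
        # Any "unknown response" trips the switch for the full window.
        self.disabled_until = self.clock() + self.disable_seconds

    def enabled(self):
        return self.clock() >= self.disabled_until
```

Making the window configurable, and not calling record_failure for benign cases like the missing meta file, are exactly the two changes the report asks for.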
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256077#comment-16256077 ] Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 10:38 PM: -- Hi [~surinuthalap...@live.com] In the latest design document, it is mentioned correctly {code:java} when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack {code} . http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html Please let me know if anything more is needed. was (Author: bharatviswa): Hi [~surinuthalap...@live.com] In the latest design document, it is mentioned correctly {code:java} when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack {code} . Please let me know if anything more is needed. > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. 
> > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
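The placement rule quoted in the comment above (for a replication factor of three) can be sketched in a few lines. This is a hypothetical illustration of the documented rule only, not the actual `BlockPlacementPolicyDefault` logic, which additionally handles fallbacks, node load, and storage types; all names below are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the documented default replica placement rule
// for a replication factor of 3. Not the real HDFS implementation.
public class PlacementSketch {
    static List<String> placeReplicas(String writerNode,
                                      List<String> remoteRackNodes) {
        List<String> targets = new ArrayList<>();
        // Replica 1: the writer's own datanode (or a random one otherwise).
        targets.add(writerNode);
        // Replica 2: a node on a different (remote) rack.
        targets.add(remoteRackNodes.get(0));
        // Replica 3: a *different* node on that same remote rack.
        targets.add(remoteRackNodes.get(1));
        return targets;
    }

    public static void main(String[] args) {
        List<String> targets = placeReplicas("rackA/dn1",
                List.of("rackB/dn3", "rackB/dn4"));
        // Two of the three replicas share the remote rack, matching the doc.
        System.out.println(targets);
    }
}
```

Note how this differs from the r2.5.2 wording disputed below, under which the second replica would stay on the local rack.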
[jira] [Created] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters
Erik Krogen created HDFS-12828: -- Summary: OIV ReverseXML Processor Fails With Escaped Characters Key: HDFS-12828 URL: https://issues.apache.org/jira/browse/HDFS-12828 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 2.8.0 Reporter: Erik Krogen The HDFS OIV ReverseXML processor fails if the XML file contains escaped characters: {code} ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls / Found 4 items drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:48 /foo drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo" drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:50 /foo` drwxr-xr-x - ekrogen supergroup 0 2017-11-16 14:49 /foo& {code} Then after doing {{saveNamespace}} on that NameNode... {code} ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! ± $HADOOP_HOME/bin/hdfs oiv -i /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML ekrogen at ekrogen-ld1 in ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk! 
± $HADOOP_HOME/bin/hdfs oiv -i /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p ReverseXML OfflineImageReconstructor failed: unterminated entity ref starting with & org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref starting with & at org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191) at org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134) {code} See attachments for relevant fsimage XML file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
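The paths in the listing above contain XML-special characters (`"`, `` ` ``, `&`). For ReverseXML to reconstruct the image, such names must survive an escape/unescape round trip between the fsimage and its XML form; the stack trace shows the unmangler choking on an unterminated entity reference. The following is a minimal, hypothetical sketch of that round trip using XML's predefined entities. It is not the actual `XMLUtils` mangling code; the class and method names are invented.

```java
// Illustrative round trip for XML-special characters in file names.
// Escape order matters: '&' must be handled first on the way out and
// last on the way back, so entities are not double-processed.
public class EntityEscapeSketch {
    static String escape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }

    static String unescape(String s) {
        return s.replace("&apos;", "'")
                .replace("&quot;", "\"")
                .replace("&gt;", ">")
                .replace("&lt;", "<")
                .replace("&amp;", "&");
    }

    public static void main(String[] args) {
        String path = "/foo&";
        String escaped = escape(path);
        // prints: /foo& -> /foo&amp; -> /foo&
        System.out.println(path + " -> " + escaped + " -> " + unescape(escaped));
    }
}
```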
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256118#comment-16256118 ] Bharat Viswanadham edited comment on HDFS-12827 at 11/16/17 11:09 PM: -- [~surinuthalap...@live.com] This is just a documentation issue. The behavior is the same across all releases. This has been fixed by HDFS-11833. As 2.5.2 is a released version, I think the documentation cannot be updated for an already released version. For newer versions, this has been fixed. was (Author: bharatviswa): [~surinuthalap...@live.com] This is just a documentation issue. This has been fixed by HDFS-11833. As 2.5.2 is a released version, I think the documentation cannot be updated for an already released version. For newer versions, this has been fixed. > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256137#comment-16256137 ] Manoj Govindassamy commented on HDFS-12823: --- Thanks for the extra efforts [~xkrogen]. Much appreciated. +1, pending Jenkins. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256143#comment-16256143 ] Chris Douglas commented on HDFS-12801: -- Also +1 on the patch. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256146#comment-16256146 ] Hadoop QA commented on HDFS-12823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 8s{color} | {color:red} The patch generated 184 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}126m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:20 | | Failed junit tests | hadoop.hdfs.TestListPathServlet | | | hadoop.hdfs.TestDataTransferProtocol | | | hadoop.hdfs.server.datanode.TestDataNodeMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.TestDatanodeRegistration | | | org.apache.hadoop.hdfs.TestDFSClientFailover | | | org.apache.hadoop.hdfs.TestDFSClientRetries | | | org.apache.hadoop.hdfs.web.TestWebHdfsTokens | | | org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream | | | org.apache.hadoop.hdfs.TestFileAppendRestart | | | org.apache.hadoop.hdfs.TestSeekBug | | | org.apache.hadoop.hdfs.TestDFSMkdirs | | | org.apache.hadoop.hdfs.TestDatanodeReport | | | org.apache.hadoop.hdfs.web.TestWebHDFS | | | org.apache.hadoop.hdfs.web.TestWebHDFSXAttr | | | org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | org.apache.hadoop.hdfs.TestMiniDFSCluster | | | org.apache.hadoop.hdfs.TestDistributedFileSystem | | | org.apache.hadoop.hdfs.web.TestWebHDFSForHA | | | org.apache.hadoop.hdfs.TestBalancerBandwidth | | | org.apache.hadoop.hdfs.TestSetTimes | | | org.apache.hadoop.hdfs.TestDFSShell | | | org.apache.hadoop.hdfs.web.TestWebHDFSAcl | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256090#comment-16256090 ] Suri babu Nuthalapati commented on HDFS-12827: -- Thank you for the response, [~bharatviswa]. Is there a design change in Hadoop V2 from V1 and V3, or was the documentation simply misrepresented in v2? If not, can we update the documentation at http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html to reflect the correct details? Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > Hadoop Definitive guide says the same and I have tested and saw the same > behavior as above. > > But the documentation in the versions after r2.5.2 it was mentioned as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256166#comment-16256166 ] Erik Krogen commented on HDFS-12711: Hey [~aw], in addition to the wild fluctuations in success of HDFS unit tests (not your fault, but unfortunate) I'm seeing lots of false license violations caused by these changes, e.g.: https://builds.apache.org/job/PreCommit-HDFS-Build/22122/artifact/out/patch-asflicense-problems.txt Can we do something to solve that? > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255937#comment-16255937 ] Manoj Govindassamy commented on HDFS-12823: --- [~xkrogen], Can we please make use of {{getSocketSendBufferSize()}} instead of directly referring to the member variable in the below check in {{DFSOutputStream}}? {noformat} 1704if (client.getConf().socketSendBufferSize > 0) { 1705 sock.setSendBufferSize(client.getConf().socketSendBufferSize); 1706} {noformat} > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12813) RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1
[ https://issues.apache.org/jira/browse/HDFS-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256439#comment-16256439 ] Tsz Wo Nicholas Sze commented on HDFS-12813: Patch looks good. However, the existing code does not. Some comments/questions: - Let's have two unwrap methods to handle two different cases -# ExecutionException(InvocationTargetExeption(SomeException)) -# InvocationTargetException(SomeException) - Also, the parameter of these two methods should be ExecutionException or InvocationTargetException instead of Exception. - Pass the unwrapped exception to logProxyException. Then, isStandbyException does not need to unwrap it again. - Question: It seems to me that the code expects either ExecutionException or InvocationTargetException, could we catch either ExecutionException or InvocationTargetException instead of Exception? - Question: the patch changes successfulProxy to lastUsedProxy. Then, getProxy() may return "last unsuccessful proxy". Is it okay? > RequestHedgingProxyProvider can hide Exception thrown from the Namenode for > proxy size of 1 > --- > > Key: HDFS-12813 > URL: https://issues.apache.org/jira/browse/HDFS-12813 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-12813.001.patch, HDFS-12813.002.patch > > > HDFS-11395 fixed the problem where the MultiException thrown by > RequestHedgingProxyProvider was hidden. However when the target proxy size is > 1, then unwrapping is not done for the InvocationTargetException. for target > proxy size of 1, the unwrapping should be done till first level where as for > multiple proxy size, it should be done at 2 levels. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
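The two typed unwrap methods proposed in the review above can be sketched as follows. This is an illustrative sketch under the review's assumptions, not the committed patch; the class and method names are invented.

```java
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.ExecutionException;

// Hedged sketch of the review's two unwrap cases:
//  1. ExecutionException(InvocationTargetException(cause)) -- several proxies
//  2. InvocationTargetException(cause)                     -- a single proxy
// Typed parameters replace the broad 'Exception' the review objects to.
public class UnwrapSketch {
    // Case 1: the real cause is nested two levels deep.
    static Throwable unwrap(ExecutionException e) {
        Throwable cause = e.getCause();
        if (cause instanceof InvocationTargetException) {
            return unwrap((InvocationTargetException) cause);
        }
        return cause != null ? cause : e;
    }

    // Case 2: the real cause is one level deep.
    static Throwable unwrap(InvocationTargetException e) {
        Throwable cause = e.getCause();
        return cause != null ? cause : e;
    }

    public static void main(String[] args) {
        Exception root = new IllegalStateException("standby");
        ExecutionException wrapped =
                new ExecutionException(new InvocationTargetException(root));
        // prints: standby
        System.out.println(unwrap(wrapped).getMessage());
    }
}
```

Passing the already-unwrapped result onward (e.g. to `logProxyException`) then avoids the double unwrapping the review points out.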
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256452#comment-16256452 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 52s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 11s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 4s{color} | {color:green} hadoop-fs2img in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}218m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12778 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898085/HDFS-12778-HDFS-9806.003.patch | | Optional Tests | asflicense compile javac javadoc
[jira] [Assigned] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota reassigned HDFS-12825: - Assignee: Gabor Bota > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK *Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255134#comment-16255134 ] usharani commented on HDFS-12825: - [~Harsha1206] Thanks for reporting. [~gabor.bota] Could you please assign this to me? I already have a patch. > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK *Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HDFS-12825: -- Status: Patch Available (was: Open) > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: HDFS-12825.001.patch, error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK *Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HDFS-12825: -- Attachment: HDFS-12825.001.patch > After Block Corrupted, FSCK Report printing the Direct configuration. > --- > > Key: HDFS-12825 > URL: https://issues.apache.org/jira/browse/HDFS-12825 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > Attachments: HDFS-12825.001.patch, error.JPG > > > Scenario: > Corrupt the Block in any datanode > Take the *FSCK *Report for that file. > Actual Output: > == > printing the direct configuration in fsck report > {{dfs.namenode.replication.min}} > Expected Output: > > it should be {{MINIMAL BLOCK REPLICATION}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255737#comment-16255737 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 28s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 5s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 4s{color} | {color:orange} root: The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s{color} | {color:red} hadoop-tools/hadoop-fs2img generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 5s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 18s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}216m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-fs2img | | | org.apache.hadoop.hdfs.server.namenode.FixedBlockResolver.BLOCKSIZE_DEFAULT isn't final but should be At FixedBlockResolver.java:be At FixedBlockResolver.java:[line 37] | | Failed junit tests | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker | | | hadoop.hdfs.server.namenode.TestCheckpoint | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestDFSStripedInputStream | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.TestDFSStripedOutputStream | | |
[jira] [Updated] (HDFS-12730) Verify open files captured in the snapshots across config disable and enable
[ https://issues.apache.org/jira/browse/HDFS-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12730: -- Attachment: HDFS-12730.02.patch Attached v02 patch to address the comment: added a case to verify switching the config from on to off and its effect on the file lengths of open files in the newly taken snapshots. [~yzhangal], [~hanishakoneru], can you please take a look? > Verify open files captured in the snapshots across config disable and enable > > > Key: HDFS-12730 > URL: https://issues.apache.org/jira/browse/HDFS-12730 > Project: Hadoop HDFS > Issue Type: Test > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12730.01.patch, HDFS-12730.02.patch > > > Open files captured in the snapshots have their metadata preserved based on > the config > _dfs.namenode.snapshot.capture.openfiles_ (refer HDFS-11402). During the > upgrade scenario, or when the NameNode gets restarted with the config turned on or > off, the attributes of the open files captured in the snapshots are > influenced accordingly. It would be better to have a test case that verifies open file > attributes across config on and off transitions, and the current expected behavior > with HDFS-11402, so as to catch any regressions in the future. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
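For readers unfamiliar with the setting under test, the key discussed above is toggled in hdfs-site.xml like any other NameNode property. A minimal, illustrative fragment (only the key name is taken from this thread; the value shown is just an example):

```xml
<!-- Illustrative hdfs-site.xml fragment: enables capturing open-file
     metadata in snapshots, the setting exercised by the test above. -->
<property>
  <name>dfs.namenode.snapshot.capture.openfiles</name>
  <value>true</value>
</property>
```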
[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12681: - Attachment: HDFS-12681.12.patch Revised patch. This should fix the unit test failures. Also added a unit test to ensure {{HdfsFileStatus}} remains a superset of {{FileStatus}}. This modifies the approach taken by HDFS-12455 by removing the {{setSnapShotEnabledFlag}} method and exposing {{AttrFlags}}. Frankly, I'm not convinced that exposing all these attribute flags in {{FileStatus}}, when most are only meaningful to HDFS, is valuable. The point is moot since we've already released it, but I hope we can eventually curtail the practice. > Fold HdfsLocatedFileStatus into HdfsFileStatus > -- > > Key: HDFS-12681 > URL: https://issues.apache.org/jira/browse/HDFS-12681 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chris Douglas >Assignee: Chris Douglas >Priority: Minor > Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, > HDFS-12681.02.patch, HDFS-12681.03.patch, HDFS-12681.04.patch, > HDFS-12681.05.patch, HDFS-12681.06.patch, HDFS-12681.07.patch, > HDFS-12681.08.patch, HDFS-12681.09.patch, HDFS-12681.10.patch, > HDFS-12681.11.patch, HDFS-12681.12.patch > > > {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of > {{LocatedFileStatus}}. Conversion requires copying common fields and shedding > unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to > extend {{LocatedFileStatus}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
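The "superset" unit test mentioned above can be approximated with reflection. The sketch below is a hypothetical stand-in (the classes Base/Derived replace FileStatus/HdfsFileStatus) and is not the actual HDFS-12681 test:

```java
import java.lang.reflect.Method;

// Hypothetical sketch of a "superset" check: verify that one class exposes
// every public method the other does. Class names are stand-ins, not the
// real FileStatus/HdfsFileStatus hierarchy.
public class SupersetCheck {
    public static class Base {
        public long getLen() { return 0L; }
    }

    public static class Derived extends Base {
        public String getOwner() { return "hdfs"; }
    }

    // Returns true if 'superset' has every public method of 'subset'.
    public static boolean exposesAll(Class<?> superset, Class<?> subset) {
        for (Method m : subset.getMethods()) {
            try {
                superset.getMethod(m.getName(), m.getParameterTypes());
            } catch (NoSuchMethodException e) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(exposesAll(Derived.class, Base.class)); // true
        System.out.println(exposesAll(Base.class, Derived.class)); // false
    }
}
```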
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255980#comment-16255980 ] Erik Krogen commented on HDFS-12823: Hi [~manojg], thanks for taking a look! I would love to but that method does not exist in branch-2.7. In the 2.7 branch the fields of {{DFSClient.Conf}} are generally accessed bare; there are 50+ fields and only 4 direct getter methods. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256043#comment-16256043 ] Manoj Govindassamy commented on HDFS-12823: --- [~xkrogen], Yes, not a good idea to introduce getters and setters for all those 50+ fields as part of this jira. Adding a getter for the newly added ones will be better though. Otherwise, the v0 patch LGTM, +1. Thanks for working on this. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: HDFS-12778-HDFS-9806.003.patch Updated patch fixing the findbugs and checkstyle issues. The failed tests pass locally except {{TestCheckpoint}}, which is unrelated. > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Open (was: Patch Available) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Status: Patch Available (was: In Progress) > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12827) Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suri babu Nuthalapati updated HDFS-12827: - Summary: Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation (was: Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation) > Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture > documentation > - > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and observed the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is described as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suri babu Nuthalapati updated HDFS-12827: - Summary: Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation (was: Need Clarity onReplica Placement: The First Baby Steps in HDFS Architecture documentation) > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and observed the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is described as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
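As a concrete reading of the two quoted descriptions, the following hypothetical sketch spells out the replica target lists they imply. Rack and node names are invented for illustration; this is not the real HDFS placement code (BlockPlacementPolicyDefault):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the two quoted placement descriptions.
// Rack and node names are made up; this is not the actual placement policy code.
public class PlacementSketch {
    // r1.2.1 docs wording (and the behavior the reporter observed):
    // local rack, then a remote rack, then a second node on that same remote rack.
    public static List<String> observedPolicy() {
        return Arrays.asList("/rackA/node1", "/rackB/node1", "/rackB/node2");
    }

    // r2.5.2+ docs wording: local rack, another node on the local rack,
    // then a node on a different rack.
    public static List<String> documentedPolicy() {
        return Arrays.asList("/rackA/node1", "/rackA/node2", "/rackB/node1");
    }

    public static void main(String[] args) {
        // Both variants span exactly two racks; they differ in which rack
        // ends up holding two of the three replicas.
        System.out.println(observedPolicy());
        System.out.println(documentedPolicy());
    }
}
```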
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: HDFS-12823-branch-2.7.001.patch Fair enough, attached v001 patch with a getter for {{socketSendBufferSize}}. > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
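A minimal sketch of the kind of getter the v001 patch adds. The surrounding class is a simplified stand-in, not the actual branch-2.7 DFSClient.Conf source; only the field name comes from the thread:

```java
// Simplified stand-in for DFSClient.Conf on branch-2.7: most fields there
// are read bare, and the patch adds a getter only for the new field.
public class ConfSketch {
    private final int socketSendBufferSize;

    public ConfSketch(int socketSendBufferSize) {
        this.socketSendBufferSize = socketSendBufferSize;
    }

    // Getter added for the newly introduced SO_SNDBUF setting.
    public int getSocketSendBufferSize() {
        return socketSendBufferSize;
    }

    public static void main(String[] args) {
        ConfSketch conf = new ConfSketch(128 * 1024);
        System.out.println(conf.getSocketSendBufferSize()); // 131072
    }
}
```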
[jira] [Updated] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12814: --- Priority: Trivial (was: Minor) > Add blockId when warning slow mirror/disk in BlockReceiver > -- > > Key: HDFS-12814 > URL: https://issues.apache.org/jira/browse/HDFS-12814 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Trivial > Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch > > > HDFS-11603 added downstream DataNodeIds and the volume path. > To make debugging easier, those warning logs should include the blockId -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12813) RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1
[ https://issues.apache.org/jira/browse/HDFS-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254962#comment-16254962 ] Hadoop QA commented on HDFS-12813: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12813 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12897938/HDFS-12813.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c86908d71cea 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 675e9a8 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22109/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22109/testReport/ | | Max. process+thread count | 336 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22109/console | | Powered by | Apache Yetus
[jira] [Commented] (HDFS-12821) Block invalid IOException causes the DFSClient domain socket being disabled
[ https://issues.apache.org/jira/browse/HDFS-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254904#comment-16254904 ] John Zhuge commented on HDFS-12821: --- Think so too. In HDFS-12528, the file was appended, thus the last block's generation stamp was changed. Since the block meta file name contains the gen stamp, the meta file could not be found any more: {noformat} Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 not found {noformat} How was the block invalidated in this case? > Block invalid IOException causes the DFSClient domain socket being disabled > --- > > Key: HDFS-12821 > URL: https://issues.apache.org/jira/browse/HDFS-12821 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.4.0, 2.6.0 >Reporter: Gang Xie > > We use HDFS 2.4 & 2.6, and recently hit an issue where the DFSClient domain socket > is disabled when the datanode throws a block invalid exception. > The block is invalidated for some reason on the datanode, and that is fine. Then > the DFSClient tries to access this block on this datanode via the domain socket. This > triggers an IOException. On the DFSClient side, when it gets an IOException with error > code 'ERROR', it disables the domain socket and falls back to TCP, and the > worst part is that it seems to never recover the socket. > I think this is a defect: with such a "block invalid" exception, we should > not disable the domain socket because there is nothing wrong with the domain > socket service. > Any thoughts? > The code: > {code} > private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer, > Slot slot) throws IOException { > ShortCircuitCache cache = clientContext.getShortCircuitCache(); > final DataOutputStream out = > new DataOutputStream(new BufferedOutputStream(peer.getOutputStream())); > SlotId slotId = slot == null ? 
null : slot.getSlotId(); > new Sender(out).requestShortCircuitFds(block, token, slotId, 1); > DataInputStream in = new DataInputStream(peer.getInputStream()); > BlockOpResponseProto resp = BlockOpResponseProto.parseFrom( > PBHelper.vintPrefixed(in)); > DomainSocket sock = peer.getDomainSocket(); > switch (resp.getStatus()) { > case SUCCESS: > byte buf[] = new byte[1]; > FileInputStream fis[] = new FileInputStream[2]; > sock.recvFileInputStreams(fis, buf, 0, buf.length); > ShortCircuitReplica replica = null; > try { > ExtendedBlockId key = > new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId()); > replica = new ShortCircuitReplica(key, fis[0], fis[1], cache, > Time.monotonicNow(), slot); > } catch (IOException e) { > // This indicates an error reading from disk, or a format error. Since > // it's not a socket communication problem, we return null rather than > // throwing an exception. > LOG.warn(this + ": error creating ShortCircuitReplica.", e); > return null; > } finally { > if (replica == null) { > IOUtils.cleanup(DFSClient.LOG, fis[0], fis[1]); > } > } > return new ShortCircuitReplicaInfo(replica); > case ERROR_UNSUPPORTED: > if (!resp.hasShortCircuitAccessVersion()) { > LOG.warn("short-circuit read access is disabled for " + > "DataNode " + datanode + ". reason: " + resp.getMessage()); > clientContext.getDomainSocketFactory() > .disableShortCircuitForPath(pathInfo.getPath()); > } else { > LOG.warn("short-circuit read access for the file " + > fileName + " is disabled for DataNode " + datanode + > ". 
reason: " + resp.getMessage()); > } > return null; > case ERROR_ACCESS_TOKEN: > String msg = "access control error while " + > "attempting to set up short-circuit access to " + > fileName + resp.getMessage(); > if (LOG.isDebugEnabled()) { > LOG.debug(this + ":" + msg); > } > return new ShortCircuitReplicaInfo(new InvalidToken(msg)); > default: > LOG.warn(this + ": unknown response code " + resp.getStatus() + > " while attempting to set up short-circuit access. " + > resp.getMessage()); > clientContext.getDomainSocketFactory() > .disableShortCircuitForPath(pathInfo.getPath()); > <<= > return null; > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
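The comment above notes that the meta file name embeds the generation stamp, so an append (which bumps the gen stamp) leaves the old name stale. A sketch of that naming convention (illustrative only; the real logic lives in the datanode code):

```java
// Illustrative sketch of the block meta-file naming convention referenced
// in the comment: blk_<blockId>_<generationStamp>.meta. Not the actual
// datanode implementation.
public class MetaName {
    public static String metaFileName(long blockId, long genStamp) {
        return "blk_" + blockId + "_" + genStamp + ".meta";
    }

    public static void main(String[] args) {
        // Matches the block/gen-stamp pair from the quoted log line.
        System.out.println(metaFileName(1074012183L, 273087L));
        // prints "blk_1074012183_273087.meta"
    }
}
```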
[jira] [Commented] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254908#comment-16254908 ] Weiwei Yang commented on HDFS-12814: Thanks [~yangjiandan] for the patch, and thanks [~msingh] for the review; I will commit this shortly. > Add blockId when warning slow mirror/disk in BlockReceiver > -- > > Key: HDFS-12814 > URL: https://issues.apache.org/jira/browse/HDFS-12814 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Minor > Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch > > > HDFS-11603 added downstream DataNodeIds and the volume path. > To make debugging easier, those warning logs should include the blockId -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-12821) Block invalid IOException causes the DFSClient domain socket being disabled
[ https://issues.apache.org/jira/browse/HDFS-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HDFS-12821. --- Resolution: Duplicate > Block invalid IOException causes the DFSClient domain socket being disabled > --- > > Key: HDFS-12821 > URL: https://issues.apache.org/jira/browse/HDFS-12821 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.4.0, 2.6.0 >Reporter: Gang Xie > > We use HDFS 2.4 & 2.6, and recently hit an issue where the DFSClient domain socket > is disabled when the datanode throws a block invalid exception. > The block is invalidated for some reason on the datanode, and that is fine. Then > the DFSClient tries to access this block on this datanode via the domain socket. This > triggers an IOException. On the DFSClient side, when it gets an IOException with error > code 'ERROR', it disables the domain socket and falls back to TCP, and the > worst part is that it seems to never recover the socket. > I think this is a defect: with such a "block invalid" exception, we should > not disable the domain socket because there is nothing wrong with the domain > socket service. > Any thoughts? > The code: > {code} > private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer, > Slot slot) throws IOException { > ShortCircuitCache cache = clientContext.getShortCircuitCache(); > final DataOutputStream out = > new DataOutputStream(new BufferedOutputStream(peer.getOutputStream())); > SlotId slotId = slot == null ? 
null : slot.getSlotId(); > new Sender(out).requestShortCircuitFds(block, token, slotId, 1); > DataInputStream in = new DataInputStream(peer.getInputStream()); > BlockOpResponseProto resp = BlockOpResponseProto.parseFrom( > PBHelper.vintPrefixed(in)); > DomainSocket sock = peer.getDomainSocket(); > switch (resp.getStatus()) { > case SUCCESS: > byte buf[] = new byte[1]; > FileInputStream fis[] = new FileInputStream[2]; > sock.recvFileInputStreams(fis, buf, 0, buf.length); > ShortCircuitReplica replica = null; > try { > ExtendedBlockId key = > new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId()); > replica = new ShortCircuitReplica(key, fis[0], fis[1], cache, > Time.monotonicNow(), slot); > } catch (IOException e) { > // This indicates an error reading from disk, or a format error. Since > // it's not a socket communication problem, we return null rather than > // throwing an exception. > LOG.warn(this + ": error creating ShortCircuitReplica.", e); > return null; > } finally { > if (replica == null) { > IOUtils.cleanup(DFSClient.LOG, fis[0], fis[1]); > } > } > return new ShortCircuitReplicaInfo(replica); > case ERROR_UNSUPPORTED: > if (!resp.hasShortCircuitAccessVersion()) { > LOG.warn("short-circuit read access is disabled for " + > "DataNode " + datanode + ". reason: " + resp.getMessage()); > clientContext.getDomainSocketFactory() > .disableShortCircuitForPath(pathInfo.getPath()); > } else { > LOG.warn("short-circuit read access for the file " + > fileName + " is disabled for DataNode " + datanode + > ". 
reason: " + resp.getMessage()); > } > return null; > case ERROR_ACCESS_TOKEN: > String msg = "access control error while " + > "attempting to set up short-circuit access to " + > fileName + resp.getMessage(); > if (LOG.isDebugEnabled()) { > LOG.debug(this + ":" + msg); > } > return new ShortCircuitReplicaInfo(new InvalidToken(msg)); > default: > LOG.warn(this + ": unknown response code " + resp.getStatus() + > " while attempting to set up short-circuit access. " + > resp.getMessage()); > clientContext.getDomainSocketFactory() > .disableShortCircuitForPath(pathInfo.getPath()); > <<= > return null; > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12814: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.0. > Add blockId when warning slow mirror/disk in BlockReceiver > -- > > Key: HDFS-12814 > URL: https://issues.apache.org/jira/browse/HDFS-12814 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch > > > HDFS-11603 added downstream DataNodeIds and the volume path. > To make debugging easier, those warning logs should include the blockId -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12814: --- Issue Type: Improvement (was: Bug) > Add blockId when warning slow mirror/disk in BlockReceiver > -- > > Key: HDFS-12814 > URL: https://issues.apache.org/jira/browse/HDFS-12814 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch > > > HDFS-11603 added downstream DataNodeIds and volume path. > In order to better debug, those warning logs should include the blockId -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254968#comment-16254968 ] Hudson commented on HDFS-12814: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13244 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13244/]) HDFS-12814. Add blockId when warning slow mirror/disk in BlockReceiver. (wwei: rev 462e25a3b264e1148d0cbca00db7f10d43a0555f) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java > Add blockId when warning slow mirror/disk in BlockReceiver > -- > > Key: HDFS-12814 > URL: https://issues.apache.org/jira/browse/HDFS-12814 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch > > > HDFS-11603 added downstream DataNodeIds and volume path. > In order to better debug, those warning logs should include the blockId -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
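As context for what the committed patch changes, here is a hedged sketch of a slow-mirror warning message that carries the block ID. This is NOT the actual BlockReceiver code; the class name, message wording, and threshold values are illustrative, but it shows why appending the blockId lets operators correlate a slow-write warning with other log lines for the same block.

```java
// Illustrative only: the shape of a slow-write warning once the block ID is
// appended, so a specific block can be grepped across DataNode logs.
public class SlowWarnSketch {
    static String slowMirrorWarning(long blockId, String mirror,
                                    long tookMs, long thresholdMs) {
        return "Slow BlockReceiver write packet to mirror took " + tookMs
            + "ms (threshold=" + thresholdMs + "ms), downstream DataNodes=["
            + mirror + "], blockId=" + blockId;
    }

    public static void main(String[] args) {
        System.out.println(slowMirrorWarning(1073741825L, "127.0.0.1:9866", 450, 300));
    }
}
```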
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: (was: HDFS-12823-branch-2.7.002.patch) > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256295#comment-16256295 ] Erik Krogen commented on HDFS-12711: Yeah so although we obviously need to fix the unit tests, the license checker also shouldn't be picking up those temp output files in the meantime, right? > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12829) Moving logging APIs over to slf4j in hdfs
Bharat Viswanadham created HDFS-12829: - Summary: Moving logging APIs over to slf4j in hdfs Key: HDFS-12829 URL: https://issues.apache.org/jira/browse/HDFS-12829 Project: Hadoop HDFS Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256198#comment-16256198 ] Bharat Viswanadham commented on HDFS-12808: --- [~busbey] [~goiri] Updated to use slf4j. Created a task HDFS-12829 to update the other modules in hdfs > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log-related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug-level logging statement containing multiple string > concatenations without an if statement before it: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
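The tradeoff discussed above (guarding LOG.debug with isDebugEnabled vs. moving to slf4j) can be sketched with a tiny stand-in formatter. This is NOT the slf4j API, just a model of its "{}" placeholder behavior: with parameterized logging, the message string is only built when the level is enabled, so the explicit guard becomes unnecessary.

```java
// Minimal model of slf4j-style parameterized logging: formatting is deferred
// behind the level check, so unguarded debug() calls cost almost nothing.
public class DebugGuardExample {
    static boolean debugEnabled = false;
    static int formatCalls = 0; // counts how often a message string was built

    // Substitute each "{}" with the next argument, like slf4j's MessageFormatter.
    static String format(String fmt, Object... args) {
        formatCalls++;
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = fmt.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(fmt, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(fmt.substring(from)).toString();
    }

    // Like LOG.debug(fmt, args): the string is only built if debug is enabled.
    static void debug(String fmt, Object... args) {
        if (debugEnabled) {
            System.out.println(format(fmt, args));
        }
    }

    public static void main(String[] args) {
        debug("got fadvise(offset={}, len={}, flags={})", 0L, 4096, 3);
        // debug is off: nothing printed, and no message string was ever built.
        debugEnabled = true;
        debug("got fadvise(offset={}, len={}, flags={})", 0L, 4096, 3);
    }
}
```

With the old commons-logging pattern, `LOG.debug("x=" + x)` performs the concatenation even when debug is disabled unless wrapped in `if (LOG.isDebugEnabled())`, which is what the original patch proposed before the reviewers suggested slf4j instead.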
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256197#comment-16256197 ] Hadoop QA commented on HDFS-12823: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 59s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 822 unchanged - 0 fixed = 827 total (was 822) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 12s{color} | {color:red} The patch generated 331 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-hdfs:18 | | Failed junit tests | hadoop.hdfs.TestClientReportBadBlock | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | Timed out junit tests | org.apache.hadoop.hdfs.TestSetrepDecreasing | | | org.apache.hadoop.hdfs.TestFileAppend4 | | | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade | | | org.apache.hadoop.hdfs.TestLease | | | org.apache.hadoop.hdfs.TestHDFSServerPorts | | | org.apache.hadoop.hdfs.TestDFSUpgrade | | | org.apache.hadoop.hdfs.web.TestWebHDFS | | | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate | | | org.apache.hadoop.hdfs.TestRenameWhileOpen | | | org.apache.hadoop.hdfs.TestMiniDFSCluster | | | org.apache.hadoop.hdfs.TestBlockReaderFactory | | | org.apache.hadoop.hdfs.TestHFlush | | | org.apache.hadoop.hdfs.TestEncryptedTransfer | | | org.apache.hadoop.hdfs.TestDFSShell | | | org.apache.hadoop.hdfs.TestDataTransferProtocol | | | org.apache.hadoop.hdfs.TestDFSRename | | | org.apache.hadoop.hdfs.TestHDFSTrash | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 | | JIRA Issue | HDFS-12823 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898064/HDFS-12823-branch-2.7.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Updated] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12808: -- Status: Patch Available (was: Open) > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log-related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug-level logging statement containing multiple string > concatenations without an if statement before it: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256217#comment-16256217 ] Erik Krogen commented on HDFS-12823: - The license issues are false positives and, I believe, caused by HDFS-12711; I left a [comment there|https://issues.apache.org/jira/browse/HDFS-12711?focusedCommentId=16256166=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16256166] - Two checkstyle issues are caused by long static import lines; nothing I can do about those - Fixed the other three checkstyle issues; these came from matching my code to existing nearby code, but in the same spirit as the v000 to v001 patch change, I think it's better to just follow proper conventions - Most of the patch whitespace warnings are invalid; they call out lines in hdfs-default I did not modify... One line was my fault - The tests pass fine locally; I think the numerous failures and timeouts are just due to the generic problems the HDFS unit tests are having currently Attaching v002 patch with the modifications described above > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
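For readers unfamiliar with what the HDFS-9259 backport exposes, here is a standalone sketch of setting SO_SNDBUF on a client socket. The config key name `dfs.client.socket.send.buffer.size` is the one introduced by HDFS-9259; everything else (class name, value, flow) is illustrative, not DFSClient code.

```java
// Sketch: a configurable SO_SNDBUF on the client's data-transfer socket.
// Larger send buffers help throughput over high-latency links.
import java.net.Socket;

public class SndBufSketch {
    static String requestedLine(int bytes) {
        return "SO_SNDBUF requested=" + bytes;
    }

    public static void main(String[] args) throws Exception {
        int configured = 128 * 1024; // e.g. dfs.client.socket.send.buffer.size
        try (Socket s = new Socket()) {
            if (configured > 0) {
                // A non-positive value would leave the OS default in place.
                s.setSendBufferSize(configured);
            }
            System.out.println(requestedLine(configured));
            // Note: the kernel may round the effective size (Linux doubles it),
            // so s.getSendBufferSize() need not equal the requested value.
        }
    }
}
```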
[jira] [Updated] (HDFS-12823) Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-12823: --- Attachment: HDFS-12823-branch-2.7.002.patch > Backport HDFS-9259 "Make SO_SNDBUF size configurable at DFSClient" to > branch-2.7 > > > Key: HDFS-12823 > URL: https://issues.apache.org/jira/browse/HDFS-12823 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12823-branch-2.7.000.patch, > HDFS-12823-branch-2.7.001.patch, HDFS-12823-branch-2.7.002.patch > > > Given the pretty significant performance implications of HDFS-9259 (see > discussion in HDFS-10326) when doing transfers across high latency links, it > would be helpful to have this configurability exist in the 2.7 series. > Opening a new JIRA since the original HDFS-9259 has been closed for a while > and there are conflicts due to a few classes moving. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Resolution: Fixed Hadoop Flags: Reviewed Target Version/s: 3.1.0 Status: Resolved (was: Patch Available) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Target Version/s: 2.9.0, 3.0.0 (was: 3.1.0) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Fix Version/s: 3.1.0 > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12801: --- Labels: RBF (was: ) > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
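The HDFS-12801 fix above amounts to changing the default resolver class shipped in hdfs-default.xml. A sketch of the resulting entry follows; the property name is the RBF file-resolver key as best understood from the issue, so verify it against the committed patch before relying on it:

```xml
<!-- Sketch of the hdfs-default.xml change: use a real resolver instead of the
     test-only MockResolver as the default file resolver for RBF. -->
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
</property>
```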
[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256271#comment-16256271 ] Hadoop QA commented on HDFS-12681: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 19 new + 410 unchanged - 6 fixed = 429 total (was 416) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 24s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}270m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getLocalNameInBytes() may expose internal representation by returning HdfsLocatedFileStatus.uPath At HdfsLocatedFileStatus.java:by returning HdfsLocatedFileStatus.uPath At
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256275#comment-16256275 ] Allen Wittenauer commented on HDFS-12711: - Those files are the stack dumps from the unit tests that ran out of resources. Fix the unit tests, those files go away. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256293#comment-16256293 ] Konstantin Shvachko commented on HDFS-12638: I think it's a blocker for all branches 2.8 and up. Even just removing that line {{toDelete.delete();}} would prevent crashing the NameNode. > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Priority: Critical > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > OphanBlocksAfterTruncateDelete.jpg > > > Active NameNode exits due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null. By viewing the history I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check for > whether BlockCollection is null. > NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
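The crash in the stack trace above can be modeled with a small sketch. This is NOT the actual Hadoop code; it only illustrates the defensive check being discussed, where a queued ReplicationWork whose BlockCollection has since been removed (e.g. by a concurrent delete or truncate) is skipped instead of throwing an NPE in the monitor thread.

```java
// Illustrative model: replication work items queued for a block can outlive
// the file; a null check drops stale work instead of crashing the monitor.
public class ChooseTargetsGuard {
    static final class BlockCollection { }

    // Returns false (skip this work item) when the block's collection is gone;
    // the real chooseTargets would go on to pick target DataNodes.
    static boolean chooseTargets(BlockCollection bc) {
        if (bc == null) {
            // File was deleted/truncated after the work was scheduled.
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(chooseTargets(null));                  // stale: skipped
        System.out.println(chooseTargets(new BlockCollection())); // live: proceeds
    }
}
```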
[jira] [Commented] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256186#comment-16256186 ] Hadoop QA commented on HDFS-12778: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 40s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 19s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 57s{color} | {color:green} hadoop-fs2img in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12778 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898055/HDFS-12778-HDFS-9806.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2502f1a6dbec 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Open (was: Patch Available) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Attachment: (was: HDFS-12778-HDFS-9806.003.patch) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12778) [READ] Report multiple locations for PROVIDED blocks
[ https://issues.apache.org/jira/browse/HDFS-12778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12778: -- Status: Patch Available (was: Open) > [READ] Report multiple locations for PROVIDED blocks > > > Key: HDFS-12778 > URL: https://issues.apache.org/jira/browse/HDFS-12778 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12778-HDFS-9806.001.patch, > HDFS-12778-HDFS-9806.002.patch, HDFS-12778-HDFS-9806.003.patch > > > On {{getBlockLocations}}, only one Datanode is returned as the location for > all PROVIDED blocks. This can hurt the performance of applications which > typically expect 3 locations per block. We need to return multiple Datanodes for > each PROVIDED block for better application performance/resilience. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
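The idea in the issue above — returning several Datanodes for each PROVIDED block instead of always the same one — can be sketched as follows. This is purely illustrative: the class and method names are hypothetical, and the real change lives in the NameNode's block management code, but it shows one simple way to spread lookups across the datanodes backing a PROVIDED volume.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: pick up to `desired` datanodes for a PROVIDED block,
// starting at an offset derived from the block id so different blocks are
// served by different datanodes.
public class ProvidedLocationsSketch {
  static List<String> chooseLocations(List<String> datanodesWithProvided,
                                      long blockId, int desired) {
    List<String> chosen = new ArrayList<>();
    int n = datanodesWithProvided.size();
    if (n == 0) {
      return chosen;
    }
    // Non-negative start index in [0, n).
    int start = (int) ((blockId % n + n) % n);
    for (int i = 0; i < Math.min(desired, n); i++) {
      chosen.add(datanodesWithProvided.get((start + i) % n));
    }
    return chosen;
  }

  public static void main(String[] args) {
    List<String> dns = List.of("dn1", "dn2", "dn3", "dn4");
    System.out.println(chooseLocations(dns, 1001L, 3)); // [dn2, dn3, dn4]
    System.out.println(chooseLocations(dns, 1002L, 3)); // [dn3, dn4, dn1]
  }
}
```

A client application reading the returned list still gets 3 locations to fail over between, which is the resilience the description asks for.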
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256239#comment-16256239 ] Íñigo Goiri commented on HDFS-12801: Thanks for the feedback [~chris.douglas] and [~ywskycn]. I don't expect any of the new features to break existing functionality. I'll commit this one to trunk and target 3.1. I could backport to branch-3 (or even branch-2) if there is interest. Thanks for the review [~hanishakoneru], [~ywskycn] and [~chris.douglas]. > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
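The change discussed above amounts to a one-property edit in {{hdfs-default.xml}}. A sketch of the resulting entry is below; the key name is taken from the Router-based federation configuration and should be checked against the committed patch:

```xml
<!-- Sketch of the hdfs-default.xml change: use the real mount-table
     resolver instead of the unit-test MockResolver. -->
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
</property>
```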
[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lufei updated HDFS-12087: - Affects Version/s: 3.0.0-beta1 > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12087.001.patch > > > First, a user adds a policy with the -addPolicies command but does not enable > it; then the user sets a path with this policy. The error message displayed is: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' was added by the user but not enabled. The error > message does not prompt the user to enable the policy first. I think the error > message would be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable > the policy with the '-enablePolicy' EC command first.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256263#comment-16256263 ] lufei edited comment on HDFS-12087 at 11/17/17 1:16 AM: This problem is already fixed. Please close this issue. was (Author: figo): This problem is already fixed. > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First, a user adds a policy with the -addPolicies command but does not enable > it; then the user sets a path with this policy. The error message displayed is: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' was added by the user but not enabled. The error > message does not prompt the user to enable the policy first. I think the error > message would be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable > the policy with the '-enablePolicy' EC command first.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver
[ https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256269#comment-16256269 ] Hudson commented on HDFS-12801: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13251 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13251/]) HDFS-12801. RBF: Set MountTableResolver as default file resolver. (inigoiri: rev e182e777947a85943504a207deb3cf3ffc047910) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml > RBF: Set MountTableResolver as default file resolver > > > Key: HDFS-12801 > URL: https://issues.apache.org/jira/browse/HDFS-12801 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF > Fix For: 3.1.0 > > Attachments: HDFS-12801.000.patch > > > {{hdfs-default.xml}} is still using the {{MockResolver}} for the default > setup which is the one used for unit testing. This should be a real resolver > like the {{MountTableResolver}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256263#comment-16256263 ] lufei edited comment on HDFS-12087 at 11/17/17 1:18 AM: This problem has already been fixed by someone. So please close this issue, thanks. was (Author: figo): This problem is already fixed. Please close this issue. > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1, 3.0.0-alpha3 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First, a user adds a policy with the -addPolicies command but does not enable > it; then the user sets a path with this policy. The error message displayed is: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' was added by the user but not enabled. The error > message does not prompt the user to enable the policy first. I think the error > message would be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable > the policy with the '-enablePolicy' EC command first.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12808) Add LOG.isDebugEnabled() guard for LOG.debug("...")
[ https://issues.apache.org/jira/browse/HDFS-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256267#comment-16256267 ] Íñigo Goiri commented on HDFS-12808: The change LGTM. The style for the {{Logger}} is a little ugly, I'd prefer: {code} private static final Logger LOG = LoggerFactory.getLogger(TestCachingStrategy.class); {code} BTW, just add new patch files and leave the old ones. > Add LOG.isDebugEnabled() guard for LOG.debug("...") > --- > > Key: HDFS-12808 > URL: https://issues.apache.org/jira/browse/HDFS-12808 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Mehran Hassani >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12808.00.patch > > > I am conducting research on log related bugs. I tried to make a tool to fix > repetitive yet simple patterns of bugs that are related to logs. In this > file, there is a debug level logging statement containing multiple string > concatenation without the if statement before them: > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java, > LOG.debug("got fadvise(offset=" + offset + ", len=" + len +",flags=" + flags > + ")");, 82 > Would you be interested in adding the if, to the logging statement? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
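For context, the guard the patch discusses looks like the following. This is a minimal, self-contained sketch: the logger here is a stand-in stub, not the project's actual {{Logger}}, but it shows why the guard matters — without it, the string concatenation runs even when debug logging is disabled.

```java
// Sketch of the LOG.isDebugEnabled() guard; StubLogger simulates a real
// logging API so the example runs standalone.
public class DebugGuardSketch {
  static class StubLogger {
    private final boolean debugEnabled;
    int messagesBuilt = 0; // counts how many debug strings were constructed

    StubLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
    boolean isDebugEnabled() { return debugEnabled; }
    void debug(String msg) { /* would write msg to the log */ }
  }

  static void fadvise(StubLogger log, long offset, long len, int flags) {
    // Guarded: the concatenation only happens when debug is on.
    if (log.isDebugEnabled()) {
      log.messagesBuilt++;
      log.debug("got fadvise(offset=" + offset + ", len=" + len
          + ", flags=" + flags + ")");
    }
  }

  public static void main(String[] args) {
    StubLogger off = new StubLogger(false);
    StubLogger on = new StubLogger(true);
    fadvise(off, 0, 4096, 1);
    fadvise(on, 0, 4096, 1);
    System.out.println("debug off, messages built: " + off.messagesBuilt);
    System.out.println("debug on, messages built: " + on.messagesBuilt);
  }
}
```

With an SLF4J {{Logger}} (the style preferred in the comment above), the guard is usually unnecessary: parameterized messages like {{LOG.debug("got fadvise(offset={}, len={}, flags={})", offset, len, flags)}} defer the formatting until the level check passes.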
[jira] [Commented] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256265#comment-16256265 ] Suri babu Nuthalapati commented on HDFS-12827: -- Thank you, I will mark it as resolved. Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and seen the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is stated as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12087) The error message is not friendly when set a path with the policy not enabled
[ https://issues.apache.org/jira/browse/HDFS-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lufei updated HDFS-12087: - Status: Open (was: Patch Available) > The error message is not friendly when set a path with the policy not enabled > - > > Key: HDFS-12087 > URL: https://issues.apache.org/jira/browse/HDFS-12087 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-alpha3, 3.0.0-beta1 >Reporter: lufei >Assignee: lufei > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12087.001.patch > > > First, a user adds a policy with the -addPolicies command but does not enable > it; then the user sets a path with this policy. The error message displayed is: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled'.{color} > The policy 'XOR-2-1-128k' was added by the user but not enabled. The error > message does not prompt the user to enable the policy first. I think the error > message would be better as below: > {color:#707070}RemoteException: Policy 'XOR-2-1-128k' does not match any > enabled erasure coding policies: []. The set of enabled erasure coding > policies can be configured at 'dfs.namenode.ec.policies.enabled', or enable > the policy with the '-enablePolicy' EC command first.{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation
[ https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256265#comment-16256265 ] Suri babu Nuthalapati edited comment on HDFS-12827 at 11/17/17 1:17 AM: Thank you, you can mark it as resolved. Suri was (Author: surinuthalap...@live.com): Thank you, I will mark it as resolved. Suri > Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture > documentation > -- > > Key: HDFS-12827 > URL: https://issues.apache.org/jira/browse/HDFS-12827 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Suri babu Nuthalapati >Priority: Minor > > The placement should be this: > https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a node in a different (remote) rack, and the last on a different > node in the same remote rack. > The Hadoop Definitive Guide says the same, and I have tested and seen the same > behavior as above. > > But in the documentation for versions after r2.5.2 it is stated as: > http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html > HDFS’s placement policy is to put one replica on one node in the local rack, > another on a different node in the local rack, and the last on a different > node in a different rack. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12711) deadly hdfs test
[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256283#comment-16256283 ] Allen Wittenauer commented on HDFS-12711: - It's probably also worth pointing out that those files also represent tests that weren't actually executed. So they aren't recorded in the fail/success output. > deadly hdfs test > > > Key: HDFS-12711 > URL: https://issues.apache.org/jira/browse/HDFS-12711 > Project: Hadoop HDFS > Issue Type: Test >Affects Versions: 2.9.0, 2.8.2 >Reporter: Allen Wittenauer >Priority: Critical > Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] legend updated HDFS-12623: -- Resolution: Auto Closed Status: Resolved (was: Patch Available) > Add UT for the Test Command > --- > > Key: HDFS-12623 > URL: https://issues.apache.org/jira/browse/HDFS-12623 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 >Reporter: legend > Attachments: HDFS-12623.001.patch, HDFS-12623.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12824) ViewFileSystem should support EC.
[ https://issues.apache.org/jira/browse/HDFS-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harshakiran Reddy updated HDFS-12824: - Description: Current {{ViewFileSystem}} does not support EC, it will throw {{IllegalArgumentException}}. (was: Current ViewFileSystem does not support EC, it will throw IllegalArgumentException.) > ViewFileSystem should support EC. > - > > Key: HDFS-12824 > URL: https://issues.apache.org/jira/browse/HDFS-12824 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, fs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy > > Current {{ViewFileSystem}} does not support EC, it will throw > {{IllegalArgumentException}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12824) ViewFileSystem should support EC.
[ https://issues.apache.org/jira/browse/HDFS-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harshakiran Reddy updated HDFS-12824: - Description: Current {{ViewFileSystem}} does not support EC, it will throw {{IllegalArgumentException}}. {noformat} ./hdfs ec -listPolicies IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file system {noformat} was:Current {{ViewFileSystem}} does not support EC, it will throw {{IllegalArgumentException}}. > ViewFileSystem should support EC. > - > > Key: HDFS-12824 > URL: https://issues.apache.org/jira/browse/HDFS-12824 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, fs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy > > Current {{ViewFileSystem}} does not support EC, it will throw > {{IllegalArgumentException}}. > {noformat} > ./hdfs ec -listPolicies > IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file > system > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
Harshakiran Reddy created HDFS-12825: Summary: After Block Corrupted, FSCK Report printing the Direct configuration. Key: HDFS-12825 URL: https://issues.apache.org/jira/browse/HDFS-12825 Project: Hadoop HDFS Issue Type: Wish Components: hdfs Affects Versions: 3.0.0-alpha1 Reporter: Harshakiran Reddy Priority: Minor Scenario: Corrupt a block on any datanode, then take the *FSCK* report for that file. Actual Output: the fsck report prints the raw configuration key {{dfs.namenode.replication.min}} Expected Output: it should be {{MINIMAL BLOCK REPLICATION}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
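The fix the report asks for boils down to translating the raw configuration key into a readable label before fsck prints it. A minimal sketch, with a hypothetical lookup map (the key and label strings come from the report; the surrounding fsck code is not shown):

```java
import java.util.Map;

// Illustrative only: map raw config keys to the human-readable labels the
// fsck report should print instead.
public class FsckLabelSketch {
  private static final Map<String, String> FRIENDLY_LABELS = Map.of(
      "dfs.namenode.replication.min", "MINIMAL BLOCK REPLICATION");

  // Fall back to the raw key when no friendly label is registered.
  static String render(String configKey) {
    return FRIENDLY_LABELS.getOrDefault(configKey, configKey);
  }

  public static void main(String[] args) {
    System.out.println(render("dfs.namenode.replication.min"));
    System.out.println(render("dfs.unknown.key"));
  }
}
```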
[jira] [Assigned] (HDFS-12824) ViewFileSystem should support EC.
[ https://issues.apache.org/jira/browse/HDFS-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore reassigned HDFS-12824: - Assignee: Surendra Singh Lilhore > ViewFileSystem should support EC. > - > > Key: HDFS-12824 > URL: https://issues.apache.org/jira/browse/HDFS-12824 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, fs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: Surendra Singh Lilhore > > Current {{ViewFileSystem}} does not support EC, it will throw > {{IllegalArgumentException}}. > {noformat} > ./hdfs ec -listPolicies > IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file > system > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255088#comment-16255088 ] Yiqun Lin commented on HDFS-12500: -- As Ozone is very close to being merged to trunk, I'd like to get this JIRA done to reduce the verbosity of the logs introduced in HDFS-12489. Attaching the patch. [~cheersyang], please have a review. > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > > Per discussion in HDFS-12489, to reduce the verbosity of logs when an exception > happens, let's add a logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, when the logger is added, let's > add a debug log to print the total time a command execution spent. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12500: - Status: Patch Available (was: Open) > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per discussion in HDFS-12489 to reduce the verbosity of logs when exception > happens, lets add logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, when logger is added, lets > add a debug log to print the total time a command execution spent. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level
[ https://issues.apache.org/jira/browse/HDFS-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12500: - Attachment: HDFS-12500-HDFS-7240.001.patch > Ozone: add logger for oz shell commands and move error stack traces to DEBUG > level > -- > > Key: HDFS-12500 > URL: https://issues.apache.org/jira/browse/HDFS-12500 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-12500-HDFS-7240.001.patch > > > Per discussion in HDFS-12489 to reduce the verbosity of logs when exception > happens, lets add logger to {{Shell.java}} and move error stack traces to > DEBUG level. > And to track the execution time of oz commands, when logger is added, lets > add a debug log to print the total time a command execution spent. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
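The two changes the HDFS-12500 description asks for can be sketched together. This is not the actual patch: the real code wires an SLF4J logger into Ozone's {{Shell.java}}, while here the logger and command are simulated with plain Java so the example is self-contained.

```java
// Sketch: show a terse error to the user, keep the stack trace and the
// command timing at DEBUG level only.
public class OzShellTimingSketch {
  static boolean debugEnabled = false; // stands in for LOG.isDebugEnabled()

  static void runCommand(Runnable cmd, String name) {
    long start = System.nanoTime();
    try {
      cmd.run();
    } catch (RuntimeException e) {
      // Terse, user-facing message; full stack trace only when debugging.
      System.out.println("Command " + name + " failed: " + e.getMessage());
      if (debugEnabled) {
        e.printStackTrace();
      }
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      if (debugEnabled) {
        System.out.println(name + " took " + elapsedMs + " ms");
      }
    }
  }

  public static void main(String[] args) {
    runCommand(() -> { throw new RuntimeException("volume not found"); },
        "infoVolume");
  }
}
```

With debug off, a failing command produces a single explanatory line instead of a full stack trace, which is the verbosity reduction discussed in HDFS-12489.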