[jira] [Commented] (HDFS-15261) RBF: Add Block Related Metrics
[ https://issues.apache.org/jira/browse/HDFS-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075731#comment-17075731 ] Hadoop QA commented on HDFS-15261: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 10 new + 3 unchanged - 0 fixed = 13 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 6s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 18s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15261 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998961/HDFS-15261-02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 0edeb94350aa 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e6455cc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/29080/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29080/testReport/ | | Max. process+thread count | 2946 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/h
[jira] [Updated] (HDFS-15262) Add fsck servlet to Secondary NameNode
[ https://issues.apache.org/jira/browse/HDFS-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-15262: -- Description: Allows for running fsck on Secondary NameNode. The diff is one-line: https://github.com/smengcl/hadoop/commit/0e45887d264258b91d53c09739c182ab02cb23bb was: Allows for running fsck on Secondary NameNode. https://github.com/smengcl/hadoop/commit/0e45887d264258b91d53c09739c182ab02cb23bb > Add fsck servlet to Secondary NameNode > -- > > Key: HDFS-15262 > URL: https://issues.apache.org/jira/browse/HDFS-15262 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > > Allows for running fsck on Secondary NameNode. > The diff is one-line: > https://github.com/smengcl/hadoop/commit/0e45887d264258b91d53c09739c182ab02cb23bb -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15262) Add fsck servlet to Secondary NameNode
[ https://issues.apache.org/jira/browse/HDFS-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-15262: -- Description: Allows for running fsck on Secondary NameNode. https://github.com/smengcl/hadoop/commit/0e45887d264258b91d53c09739c182ab02cb23bb was:Allows for running fsck on Secondary NameNode. > Add fsck servlet to Secondary NameNode > -- > > Key: HDFS-15262 > URL: https://issues.apache.org/jira/browse/HDFS-15262 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > > Allows for running fsck on Secondary NameNode. > https://github.com/smengcl/hadoop/commit/0e45887d264258b91d53c09739c182ab02cb23bb -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-15262) Add fsck servlet to Secondary NameNode
[ https://issues.apache.org/jira/browse/HDFS-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng resolved HDFS-15262. --- Resolution: Won't Fix On second thought, it doesn't really make sense to put fsck in the Secondary NameNode, since it doesn't receive block reports from the DataNodes. > Add fsck servlet to Secondary NameNode > -- > > Key: HDFS-15262 > URL: https://issues.apache.org/jira/browse/HDFS-15262 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > > Allows for running fsck on Secondary NameNode. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15240) Erasure Coding: dirty buffer causes reconstruction block error
[ https://issues.apache.org/jira/browse/HDFS-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] HuangTao updated HDFS-15240: Attachment: HDFS-15240.004.patch > Erasure Coding: dirty buffer causes reconstruction block error > -- > > Key: HDFS-15240 > URL: https://issues.apache.org/jira/browse/HDFS-15240 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, erasure-coding >Reporter: HuangTao >Assignee: HuangTao >Priority: Major > Attachments: HDFS-15240.001.patch, HDFS-15240.002.patch, > HDFS-15240.003.patch, HDFS-15240.004.patch > > > When reading some LZO files, we found that some blocks were broken. > I read back all internal blocks (b0-b8) of the block group (RS-6-3-1024k) directly from the DNs, chose 6 blocks (b0-b5) to decode the other 3 (b6', b7', b8'), and computed the longest common subsequence (LCS) between b6' (decoded) and b6 (read from the DN), and likewise for b7'/b7 and b8'/b8. > After iterating through every combination of 6 source blocks from the block group, I found one case where the LCS length is the block length minus 64KB; 64KB is exactly the length of the ByteBuffer used by StripedBlockReader. So the corrupt reconstruction block was produced by a dirty buffer. > The following log snippet (showing only 2 of the 28 cases) is the output of my check program. In my case, I knew the 3rd block was corrupt, so 5 other blocks were needed to decode the remaining 3, and I found that the 1st block's LCS length is the block length minus 64KB. > This means blocks 0, 1, 2, 4, 5, and 6 were used to reconstruct the 3rd block, and the dirty buffer was used before reading the 1st block. > Note that StripedBlockReader read from offset 0 of the 1st block after the dirty buffer had been used.
> {code:java} > decode from [0, 2, 3, 4, 5, 7] -> [1, 6, 8] > Check Block(1) first 131072 bytes longest common substring length 4 > Check Block(6) first 131072 bytes longest common substring length 4 > Check Block(8) first 131072 bytes longest common substring length 4 > decode from [0, 2, 3, 4, 5, 6] -> [1, 7, 8] > Check Block(1) first 131072 bytes longest common substring length 65536 > CHECK AGAIN: Block(1) all 27262976 bytes longest common substring length > 27197440 # this one > Check Block(7) first 131072 bytes longest common substring length 4 > Check Block(8) first 131072 bytes longest common substring length 4{code} > Now I know that a dirty buffer causes the reconstruction block error, but where does the dirty buffer come from? > After digging into the code and the DN log, I found that the following DN log is the root cause. > {code:java} > [INFO] [stripedRead-1017] : Interrupted while waiting for IO on channel > java.nio.channels.SocketChannel[connected local=/:52586 > remote=/:50010]. 18 millis timeout left.
> [WARN] [StripedBlockReconstruction-199] : Failed to reconstruct striped > block: BP-714356632--1519726836856:blk_-YY_3472979393 > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.util.StripedBlockUtil.getNextCompletedStripedRead(StripedBlockUtil.java:314) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.doReadMinimumSources(StripedReader.java:308) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.readMinimumSources(StripedReader.java:269) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.reconstruct(StripedBlockReconstructor.java:94) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.run(StripedBlockReconstructor.java:60) > at > java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834) {code} > A read from a DN may time out (held by a future F) and emit the INFO log above, but by then the futures collection that contained F has already been cleared, so in > {code:java} > return new StripingChunkReadResult(futures.remove(future), > StripingChunkReadResult.CANCELLED); {code} > futures.remove(future) returns null and causes the NPE, and the EC reconstruction fails. Then, in the finally phase, the code snippet in *getStripedReader().close()* > {code:java} > reconstructor.freeBuffer(reader.getReadBuffer()); > reader.freeReadBuffer(); > reader.closeBlockReader(); {code} > frees the buffer first, but the StripedBlockReader still holds the buffer and writes into it. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: h
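The race described in HDFS-15240 can be sketched in isolation. The snippet below uses simplified stand-in types invented for illustration (they are not the actual Hadoop classes); it shows only the core mechanism visible in the quoted code: Map.remove() returns null for a key that was already cleared, and unboxing that null Integer into an int throws the NullPointerException.

```java
import java.util.HashMap;
import java.util.Map;

class DirtyBufferNpeSketch {
    // Simplified stand-in for the result object in the quoted snippet:
    // the constructor stores the index into an int field, so a null
    // Integer passed in is unboxed and throws NullPointerException.
    static class ChunkReadResult {
        final int chunkIndex;
        final int state;
        ChunkReadResult(Integer index, int state) {
            this.chunkIndex = index; // unboxing null -> NPE
            this.state = state;
        }
    }

    static boolean triggersNpe() {
        Map<String, Integer> futures = new HashMap<>();
        String future = "stripedRead-from-dn-1";
        futures.put(future, 1);

        // A read times out and the error path clears all pending futures...
        futures.clear();

        // ...but the completion path still runs the equivalent of
        // new StripingChunkReadResult(futures.remove(future), CANCELLED):
        try {
            new ChunkReadResult(futures.remove(future), /* CANCELLED */ 2);
            return false;
        } catch (NullPointerException e) {
            return true; // remove() returned null; unboxing it threw
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE triggered: " + triggersNpe()); // prints: NPE triggered: true
    }
}
```

This is why the exception surfaces inside getNextCompletedStripedRead rather than at the clear() call: the null only becomes fatal when it is unboxed.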
[jira] [Updated] (HDFS-15261) RBF: Add Block Related Metrics
[ https://issues.apache.org/jira/browse/HDFS-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15261: Attachment: HDFS-15261-02.patch > RBF: Add Block Related Metrics > -- > > Key: HDFS-15261 > URL: https://issues.apache.org/jira/browse/HDFS-15261 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15261-01.patch, HDFS-15261-02.patch > > > Add Metrics Related to Blocks. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15262) Add fsck servlet to Secondary NameNode
Siyao Meng created HDFS-15262: - Summary: Add fsck servlet to Secondary NameNode Key: HDFS-15262 URL: https://issues.apache.org/jira/browse/HDFS-15262 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: Siyao Meng Assignee: Siyao Meng Allows for running fsck on Secondary NameNode. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15262) Add fsck servlet to Secondary NameNode
[ https://issues.apache.org/jira/browse/HDFS-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-15262: -- Priority: Minor (was: Major) > Add fsck servlet to Secondary NameNode > -- > > Key: HDFS-15262 > URL: https://issues.apache.org/jira/browse/HDFS-15262 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > > Allows for running fsck on Secondary NameNode. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15261) RBF: Add Block Related Metrics
[ https://issues.apache.org/jira/browse/HDFS-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075327#comment-17075327 ] Hadoop QA commented on HDFS-15261: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 10 new + 3 unchanged - 0 fixed = 13 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 38s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf | | | Self assignment of field NamenodeStatusReport.scheduledReplicationBlocks in org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport.setNamenodeInfo(int, long, long, long) At NamenodeStatusReport.java:in org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport.setNamenodeInfo(int, long, long, long) At NamenodeStatusReport.java:[line 405] | | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterNamenodeMonitoring | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15261 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998856/HDFS-15261-01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 98f9389f5e86 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x
[jira] [Updated] (HDFS-15261) RBF: Add Block Related Metrics
[ https://issues.apache.org/jira/browse/HDFS-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15261: Attachment: HDFS-15261-01.patch > RBF: Add Block Related Metrics > -- > > Key: HDFS-15261 > URL: https://issues.apache.org/jira/browse/HDFS-15261 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15261-01.patch > > > Add Metrics Related to Blocks. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15261) RBF: Add Block Related Metrics
Ayush Saxena created HDFS-15261: --- Summary: RBF: Add Block Related Metrics Key: HDFS-15261 URL: https://issues.apache.org/jira/browse/HDFS-15261 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Attachments: HDFS-15261-01.patch Add Metrics Related to Blocks. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15261) RBF: Add Block Related Metrics
[ https://issues.apache.org/jira/browse/HDFS-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15261: Hadoop Flags: Reviewed Status: Patch Available (was: Open) > RBF: Add Block Related Metrics > -- > > Key: HDFS-15261 > URL: https://issues.apache.org/jira/browse/HDFS-15261 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15261-01.patch > > > Add Metrics Related to Blocks. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14505) "touchz" command should check quota limit before deleting an already existing file
[ https://issues.apache.org/jira/browse/HDFS-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075289#comment-17075289 ] Hadoop QA commented on HDFS-14505: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}170m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-14505 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998849/HDFS-14505.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c8c0aa502183 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e6455cc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29078/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29078/testReport/ | | Max. process+thread count | 2992 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-B
[jira] [Comment Edited] (HDFS-14505) "touchz" command should check quota limit before deleting an already existing file
[ https://issues.apache.org/jira/browse/HDFS-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075246#comment-17075246 ] Ayush Saxena edited comment on HDFS-14505 at 4/4/20, 6:56 PM: -- Well, honestly, I didn't understand the fix itself. How is it ensured that the quota won't be violated after the create? Moreover, the quota should be checked only once; if it is already checked during the delete, it shouldn't be checked again later... What happens for a normal file of non-zero size when the storage space quota gets violated? The old file still gets deleted. And if the file create fails after the delete for some reason other than quota, we can't prevent that either. Isn't overwrite meant to behave like that? was (Author: ayushtkn): Well, honestly, I didn't understand the fix itself. How is it ensured that the quota won't be violated after the create? There is a mkdir for the parent too, if the parent doesn't exist. Moreover, the quota should be checked only once; if it is already checked during the delete, it shouldn't be checked again later... > "touchz" command should check quota limit before deleting an already existing > file > -- > > Key: HDFS-14505 > URL: https://issues.apache.org/jira/browse/HDFS-14505 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-14505.001.patch > > > {code:java} > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:01,080 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Found 1 items > -rw-r--r-- 1 sbanerjee hadoop 0 2019-05-21 15:10 /dir2/file4 > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/file4 > 2019-05-21 15:14:12,247 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... 
using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=5 > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:20,607 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} > Here, the "touchz" command failed to create the file as the quota limit was > hit, but ended up deleting the original file which existed. It should do the > quota check before deleting the file so that after successful deletion, > creation should succeed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
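The check-before-delete ordering requested in HDFS-14505 can be sketched as follows. QuotaDir, touchz, and QuotaExceededException here are hypothetical names invented for illustration, not the actual NameNode API; the point is only that the namespace quota is verified before the existing file is deleted, so a quota violation can no longer destroy the old file.

```java
import java.util.HashSet;
import java.util.Set;

class TouchzQuotaSketch {
    static class QuotaExceededException extends RuntimeException {}

    // Toy directory with a namespace quota (max number of entries).
    static class QuotaDir {
        final int nsQuota;
        final Set<String> files = new HashSet<>();
        QuotaDir(int nsQuota) { this.nsQuota = nsQuota; }

        // touchz with the proposed ordering: verify the quota FIRST,
        // then delete the old entry and create the zero-length file.
        void touchz(String name) {
            boolean exists = files.contains(name);
            // Entry count after the operation: overwriting keeps the
            // count unchanged; creating a new entry adds one.
            int countAfter = files.size() + (exists ? 0 : 1);
            if (countAfter > nsQuota) {
                throw new QuotaExceededException(); // old file left intact
            }
            files.remove(name); // safe: quota already verified
            files.add(name);    // zero-length replacement
        }
    }

    public static void main(String[] args) {
        QuotaDir dir = new QuotaDir(3);
        dir.touchz("file4"); // create
        dir.touchz("file4"); // overwrite in place; count unchanged
        System.out.println("entries: " + dir.files.size()); // prints: entries: 1
    }
}
```

With this ordering, the failure mode in the report (quota exceeded but the pre-existing /dir2/file4 deleted anyway) cannot occur, because the quota check happens while the old entry still exists.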
[jira] [Commented] (HDFS-14505) "touchz" command should check quota limit before deleting an already existing file
[ https://issues.apache.org/jira/browse/HDFS-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075246#comment-17075246 ] Ayush Saxena commented on HDFS-14505: - Well, honestly, I didn't understand the fix itself. How is it ensured that the quota won't be violated after the create? There is a mkdir for the parent too, if the parent doesn't exist. Moreover, the quota should be checked only once; if it is already checked during the delete, it shouldn't be checked again later... > "touchz" command should check quota limit before deleting an already existing > file > -- > > Key: HDFS-14505 > URL: https://issues.apache.org/jira/browse/HDFS-14505 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-14505.001.patch > > > {code:java} > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:01,080 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Found 1 items > -rw-r--r-- 1 sbanerjee hadoop 0 2019-05-21 15:10 /dir2/file4 > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/file4 > 2019-05-21 15:14:12,247 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=5 > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:20,607 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} > Here, the "touchz" command failed to create the file as the quota limit was > hit, but ended up deleting the original file which existed. It should do the > quota check before deleting the file so that after successful deletion, > creation should succeed. 
[jira] [Commented] (HDFS-15259) Reduce useless log information in FSNamesystemAuditLogger
[ https://issues.apache.org/jira/browse/HDFS-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075225#comment-17075225 ] Hadoop QA commented on HDFS-15259: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 3s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 34s{color} | {color:orange} root: The patch generated 9 new + 238 unchanged - 1 fixed = 247 total (was 239) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 38s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 5s{color} | {color:red} hadoop-dynamometer-workload in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}223m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.tools.dynamometer.workloadgenerator.audit.TestAuditLogDirectParser | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15259 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998836/HDFS-15259.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a02a809d66e7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | g
[jira] [Updated] (HDFS-14505) "touchz" command should check quota limit before deleting an already existing file
[ https://issues.apache.org/jira/browse/HDFS-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-14505: - Attachment: HDFS-14505.001.patch Status: Patch Available (was: Open) > "touchz" command should check quota limit before deleting an already existing > file > -- > > Key: HDFS-14505 > URL: https://issues.apache.org/jira/browse/HDFS-14505 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-14505.001.patch > > > {code:java} > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:01,080 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Found 1 items > -rw-r--r-- 1 sbanerjee hadoop 0 2019-05-21 15:10 /dir2/file4 > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/file4 > 2019-05-21 15:14:12,247 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=5 > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:14:20,607 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} > Here, the "touchz" command failed to create the file as the quota limit was > hit, but ended up deleting the original file which existed. It should do the > quota check before deleting the file so that after successful deletion, > creation should succeed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
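The ordering bug described above reduces to a small model: if the namespace count is checked only after the existing file has been removed, a failed create still destroys data, whereas a quota-neutral overwrite checked up front succeeds without consuming quota. A minimal sketch under that assumption (illustrative class and method names only, not the actual FSNamesystem/FSDirectory code):

```java
import java.util.HashSet;
import java.util.Set;

/** Toy model of the touchz ordering bug; names are illustrative only. */
public class TouchzQuotaSketch {
    public static class QuotaDir {
        public final int nsQuota;                 // namespace quota (files + dirs)
        public final Set<String> entries = new HashSet<>();
        public QuotaDir(int nsQuota) { this.nsQuota = nsQuota; }

        /** Buggy order: delete first, then discover the quota violation. */
        public boolean touchzDeleteFirst(String name) {
            entries.remove(name);                 // existing file is gone now
            if (entries.size() + 1 > nsQuota) {
                return false;                     // create fails, data already lost
            }
            return entries.add(name);
        }

        /** Fixed order: a pure overwrite is quota-neutral, so check first. */
        public boolean touchzCheckFirst(String name) {
            boolean replacing = entries.contains(name);
            if (!replacing && entries.size() + 1 > nsQuota) {
                return false;                     // fail before deleting anything
            }
            entries.remove(name);
            return entries.add(name);
        }
    }
}
```

With the quota already exceeded (quota=3, count=5 in the report), the delete-first order loses the original file, while the check-first order treats the overwrite as quota-neutral and preserves it.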
[jira] [Commented] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075219#comment-17075219 ] Wei-Chiu Chuang commented on HDFS-15207: LGTM > VolumeScanner skip to scan blocks accessed during recent scan peroid > > > Key: HDFS-15207 > URL: https://issues.apache.org/jira/browse/HDFS-15207 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, > HDFS-15207.004.patch, HDFS-15207.patch, HDFS-15207.patch > > > Check the access time of block file to avoid scanning recently changed > blocks, reducing disk IO. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075213#comment-17075213 ] Hadoop QA commented on HDFS-15207: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 54s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 585 unchanged - 0 fixed = 586 total (was 585) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 501 unchanged - 0 fixed = 502 total (was 501) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}154m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15207 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998838/HDFS-15207.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 53b428e7119a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e6455cc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | javac | https://bu
[jira] [Commented] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075176#comment-17075176 ] Íñigo Goiri commented on HDFS-15207: LambdaTestUtils with a wait inside is kind of weird, so I guess using fail() in this case is fine. +1 on [^HDFS-15207.004.patch]. > VolumeScanner skip to scan blocks accessed during recent scan peroid > > > Key: HDFS-15207 > URL: https://issues.apache.org/jira/browse/HDFS-15207 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, > HDFS-15207.004.patch, HDFS-15207.patch, HDFS-15207.patch > > > Check the access time of block file to avoid scanning recently changed > blocks, reducing disk IO. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075162#comment-17075162 ] Hadoop QA commented on HDFS-15255: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 26m 28s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 11s{color} | {color:orange} root: The patch generated 2 new + 536 unchanged - 0 fixed = 538 total (was 536) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 38s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}295m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Call to org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage.equals(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor) in org.apache.hadoop.hdfs.server.namenode.CacheManager.setCachedLocations(LocatedBl
[jira] [Commented] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075161#comment-17075161 ] Yang Yun commented on HDFS-15207: - Thanks [~weichiu] and [~elgoiri] for the review. Uploaded a new patch, HDFS-15207.004.patch, with the following changes: * Use fail() rather than assertTrue in the test code. * Merge the else and the if in the VolumeScanner. > VolumeScanner skip to scan blocks accessed during recent scan peroid > > > Key: HDFS-15207 > URL: https://issues.apache.org/jira/browse/HDFS-15207 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, > HDFS-15207.004.patch, HDFS-15207.patch, HDFS-15207.patch > > > Check the access time of block file to avoid scanning recently changed > blocks, reducing disk IO. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
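The change under review, skipping blocks whose files were accessed within the current scan period, can be approximated with plain java.nio file attributes. A rough standalone sketch under that assumption (not the actual VolumeScanner code, which works against the DataNode's block layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

/** Illustrative only: decides whether a block file needs rescanning. */
public class AccessTimeSkipSketch {
    /** Skip the scan if the file was accessed within the last scanPeriodMs. */
    public static boolean shouldSkip(Path blockFile, long scanPeriodMs,
                                     long nowMs) throws IOException {
        BasicFileAttributes attrs =
            Files.readAttributes(blockFile, BasicFileAttributes.class);
        long lastAccessMs = attrs.lastAccessTime().toMillis();
        // A recent read already exercised the disk path; rescanning it
        // now would add IO without improving corruption detection much.
        return nowMs - lastAccessMs < scanPeriodMs;
    }
}
```

Note that this relies on the filesystem tracking access times at all; mounts with noatime would defeat it, which is the kind of caveat the real patch has to consider.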
[jira] [Updated] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15207: Status: Open (was: Patch Available) > VolumeScanner skip to scan blocks accessed during recent scan peroid > > > Key: HDFS-15207 > URL: https://issues.apache.org/jira/browse/HDFS-15207 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, > HDFS-15207.004.patch, HDFS-15207.patch, HDFS-15207.patch > > > Check the access time of block file to avoid scanning recently changed > blocks, reducing disk IO. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15207) VolumeScanner skip to scan blocks accessed during recent scan peroid
[ https://issues.apache.org/jira/browse/HDFS-15207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15207: Attachment: HDFS-15207.004.patch Status: Patch Available (was: Open) > VolumeScanner skip to scan blocks accessed during recent scan peroid > > > Key: HDFS-15207 > URL: https://issues.apache.org/jira/browse/HDFS-15207 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15207.002.patch, HDFS-15207.003.patch, > HDFS-15207.004.patch, HDFS-15207.patch, HDFS-15207.patch > > > Check the access time of block file to avoid scanning recently changed > blocks, reducing disk IO. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14504) Rename with Snapshots does not honor quota limit
[ https://issues.apache.org/jira/browse/HDFS-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075160#comment-17075160 ] Hadoop QA commented on HDFS-14504: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}154m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.balancer.TestBalancer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-14504 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998832/HDFS-14504.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 975c857d92ef 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e6455cc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29075/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29075/testReport/ | | Max. process+thread count | 4092 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29075/cons
[jira] [Assigned] (HDFS-15225) RBF: Add snapshot counts to content summary in router
[ https://issues.apache.org/jira/browse/HDFS-15225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HDFS-15225: --- Assignee: Quan Li > RBF: Add snapshot counts to content summary in router > - > > Key: HDFS-15225 > URL: https://issues.apache.org/jira/browse/HDFS-15225 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Quan Li >Assignee: Quan Li >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15228) Cannot rename file with space in name
[ https://issues.apache.org/jira/browse/HDFS-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075152#comment-17075152 ] Ayush Saxena commented on HDFS-15228: - [~dwernerm] if this is working on the latest version, then we can close this. > Cannot rename file with space in name > - > > Key: HDFS-15228 > URL: https://issues.apache.org/jira/browse/HDFS-15228 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.2.0 > Environment: Oracle jdk1.8.161 >Reporter: Dylan Werner-Meier >Priority: Major > Attachments: TestWithStrangeFilenames.java, pom.xml > > > Hello, > While using webhdfs, I encountered a strange bug where I just cannot rename a > file if it has a space in the filename. > It seems strange to me, is there anything I am missing ? > > Edit: After some debugging, it seems to be linked with the way spaces are > encoded the webhdfs url: the JDK's URLEncoder uses '+' to encode spaces, > whereas a CURL command where the filename is encoded with '%20' for spaces > works just fine. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
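The reporter's edit pinpoints a real mismatch: java.net.URLEncoder implements application/x-www-form-urlencoded, which maps a space to '+', while a URL path component needs percent-encoding ('%20'), which java.net.URI produces when it quotes a path. A small demonstration of the two encodings:

```java
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLEncoder;

/** Demonstrates why '+' in a WebHDFS path is not interpreted as a space. */
public class SpaceEncodingDemo {
    /** Form-style encoding: spaces become '+'. Wrong for URL paths. */
    public static String formEncode(String s) throws UnsupportedEncodingException {
        return URLEncoder.encode(s, "UTF-8");
    }

    /** Path-style encoding: spaces become '%20', as a curl-built URL has. */
    public static String pathEncode(String path) throws URISyntaxException {
        // The multi-argument URI constructor quotes illegal characters
        // in the path component; getRawPath returns the quoted form.
        return new URI(null, null, path, null).getRawPath();
    }
}
```

In a path, a literal '+' is a valid character in its own right, so a server is not obliged to decode it as a space; that is why the curl request with '%20' works while the URLEncoder-built one fails.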
[jira] [Commented] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt
[ https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075151#comment-17075151 ] Ayush Saxena commented on HDFS-15237: - If the problem is solved, can we close this as a duplicate? > Get checksum of EC file failed, when some block is missing or corrupt > - > > Key: HDFS-15237 > URL: https://issues.apache.org/jira/browse/HDFS-15237 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec, hdfs >Affects Versions: 3.2.1 >Reporter: zhengchenyu >Priority: Major > Fix For: 3.2.2 > > > When we distcp from an ec directory to another one, I found some error like > this. > {code} > 2020-03-20 20:18:21,366 WARN [main] > org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, > datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20 > 20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: > src=/EC/6-3//000325_0, > datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException: > Unexpected EOF while trying to read response from server at > org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550) > at > org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709) > at > org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664) > at > org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638) > at > org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252) > at > org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) > at > org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691) > at > org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700) > at > org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138) > at > org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115) > at > org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) > at > org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at > org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at > org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at > org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at > org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at > org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at > java.security.AccessController.doPrivileged(Native Method) at > javax.security.auth.Subject.doAs(Subject.java:422) at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) > {code} > And Then I found some error in datanode like this > {code} > 2020-03-20 20:54:16,573 INFO > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: > SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = > false > 2020-03-20 20:54:16,577 ERROR > org.apache.hadoop.hdfs.server.datanode.DataNode: > bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing > BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: > /10.200.128.50:9866 > java.lang.UnsupportedOperationException > at 
java.nio.ByteBuffer.array(ByteBuffer.java:994) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90) > at > org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711) > at > org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.compute(BlockChecksumHelper.java:489) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.blockGroupChecksum(DataXceiver.java:1047) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opStripedBlockChecksum(Receiver.java:327) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:119) > at
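The UnsupportedOperationException at ByteBuffer.array() in the DataNode trace matches the documented contract of array(): it throws when the buffer has no accessible backing array, as with direct buffers, so callers are expected to guard with hasArray(). A minimal illustration of that guard (generic, not the StripedBlockChecksumReconstructor fix itself):

```java
import java.nio.ByteBuffer;

/** Shows when ByteBuffer.array() is and is not usable. */
public class ByteBufferArrayDemo {
    /** Copy the buffer contents without assuming a backing array. */
    public static byte[] toArray(ByteBuffer buf) {
        byte[] a = new byte[buf.remaining()];
        if (buf.hasArray()) {
            // Fast path: heap buffer exposes its backing array directly.
            System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(),
                             a, 0, buf.remaining());
        } else {
            // Direct (or read-only) buffer: array() would throw
            // UnsupportedOperationException, so copy via a duplicate
            // to avoid disturbing the original buffer's position.
            buf.duplicate().get(a);
        }
        return a;
    }
}
```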
[jira] [Assigned] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException
[ https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HDFS-15250: --- Assignee: Ctest > Setting `dfs.client.use.datanode.hostname` to true can crash the system > because of unhandled UnresolvedAddressException > --- > > Key: HDFS-15250 > URL: https://issues.apache.org/jira/browse/HDFS-15250 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ctest >Assignee: Ctest >Priority: Major > > *Problem:* > `dfs.client.use.datanode.hostname` is set to false by default, which means > the client will use the IP address of the datanode to connect to the > datanode, rather than its hostname. > In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`: > > {code:java} > try { > Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token, > datanode); > LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer); > return new BlockReaderPeer(peer, false); > } catch (IOException e) { > LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to" > + "{}", datanode); > throw e; > } > {code} > > If `dfs.client.use.datanode.hostname` is false, it will try to connect > via IP address. If the IP address is illegal and the connection fails, > an IOException will be thrown from `newConnectedPeer` and be handled. > If `dfs.client.use.datanode.hostname` is true, it will try to connect > via hostname. If the hostname cannot be resolved, UnresolvedAddressException > will be thrown from `newConnectedPeer`. However, UnresolvedAddressException > is not a subclass of IOException, so `nextTcpPeer` doesn't handle this > exception at all. This unhandled exception can crash the system. > > *Solution:* > Since the method already handles an illegal IP address, an unresolvable > hostname should be handled as well. 
One solution is to add the handling > logic in `nextTcpPeer`: > {code:java} > } catch (IOException e) { > LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to" > + "{}", datanode); > throw e; > } catch (UnresolvedAddressException e) { > ... // handling logic > }{code} > I am very happy to provide a patch to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
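As the report notes, `java.nio.channels.UnresolvedAddressException` extends `IllegalArgumentException`, not `IOException`, so an `IOException`-only catch block misses it. A self-contained sketch of the proposed handling, using a hypothetical `connect` helper over plain `SocketChannel` (a stand-in for the actual `nextTcpPeer` code, which is not reproduced here):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class UnresolvedDemo {
    // Hypothetical connect helper mirroring the proposed fix: translate
    // UnresolvedAddressException (a RuntimeException) into an IOException
    // so callers that only handle IOException keep working.
    static SocketChannel connect(InetSocketAddress addr) throws IOException {
        try {
            return SocketChannel.open(addr);
        } catch (UnresolvedAddressException e) {
            // Without this catch, the exception would escape any
            // IOException handler and propagate up unhandled.
            throw new IOException("Cannot resolve " + addr.getHostString(), e);
        }
    }

    public static void main(String[] args) {
        // createUnresolved() skips DNS, guaranteeing an unresolved address.
        InetSocketAddress addr =
            InetSocketAddress.createUnresolved("example.invalid", 9866);
        try {
            connect(addr);
        } catch (IOException e) {
            System.out.println("caught as IOException: " + e.getMessage());
        }
    }
}
```

An alternative is a separate `catch (UnresolvedAddressException e)` clause at the call site, as sketched in the issue; either way the failure becomes recoverable instead of crashing the reader.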
[jira] [Commented] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException
[ https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075150#comment-17075150 ] Ayush Saxena commented on HDFS-15250: - bq. I am very happy to provide a patch to do this. Go ahead!!!
[jira] [Commented] (HDFS-15259) Reduce useless log information in FSNamesystemAuditLogger
[ https://issues.apache.org/jira/browse/HDFS-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075147#comment-17075147 ] Yang Yun commented on HDFS-15259: - Thanks [~csun] for the review. Yes, it looks like it would break many things. :( > Reduce useless log information in FSNamesystemAuditLogger > - > > Key: HDFS-15259 > URL: https://issues.apache.org/jira/browse/HDFS-15259 > Project: Hadoop HDFS > Issue Type: Improvement > Components: logging, namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15259.001.patch, HDFS-15259.002.patch > > > For most operations, 'dst' is null; add a check before logging the 'dst' > information in FSNamesystemAuditLogger. > {code:java} > 2020-04-03 16:34:40,021 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true > ugi=user (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/ dst=null perm=null > proto=rpc > 2020-04-03 16:35:16,329 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true > ugi=user (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/ dst=null perm=null > proto=rpc > 2020-04-03 16:35:16,362 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: allowed=true > ugi=user (auth:SIMPLE) ip=/127.0.0.1 cmd=mkdirs src=/user dst=null > perm=yang:supergroup:rwxr-xr-x proto=rpc{code}
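A minimal sketch of the proposed check, using a simplified stand-in for the audit-line builder (the real FSNamesystem audit logger formats more fields and is not reproduced here): `dst` and `perm` are only appended when non-null, so the common `dst=null perm=null` noise disappears.

```java
public class AuditLogDemo {
    // Simplified audit-line builder. Field names follow the
    // FSNamesystem.audit format shown in the issue; the builder itself
    // is a hypothetical stand-in, not the actual Hadoop code.
    static String auditLine(boolean allowed, String ugi, String ip,
                            String cmd, String src, String dst, String perm) {
        StringBuilder sb = new StringBuilder();
        sb.append("allowed=").append(allowed)
          .append("\tugi=").append(ugi)
          .append("\tip=").append(ip)
          .append("\tcmd=").append(cmd)
          .append("\tsrc=").append(src);
        if (dst != null) {        // skip "dst=null" for the common case
            sb.append("\tdst=").append(dst);
        }
        if (perm != null) {       // likewise for "perm=null"
            sb.append("\tperm=").append(perm);
        }
        sb.append("\tproto=rpc");
        return sb.toString();
    }

    public static void main(String[] args) {
        // listStatus has no destination or permission: both fields omitted.
        System.out.println(auditLine(true, "user (auth:SIMPLE)", "/127.0.0.1",
                                     "listStatus", "/", null, null));
        // rename has a destination: dst is printed.
        System.out.println(auditLine(true, "user (auth:SIMPLE)", "/127.0.0.1",
                                     "rename", "/a", "/b", null));
    }
}
```

The trade-off flagged in the comment above is real: downstream log parsers that expect a fixed set of `key=value` columns would need to tolerate the missing fields.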
[jira] [Updated] (HDFS-15259) Reduce useless log information in FSNamesystemAuditLogger
[ https://issues.apache.org/jira/browse/HDFS-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15259: Attachment: HDFS-15259.002.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15259) Reduce useless log information in FSNamesystemAuditLogger
[ https://issues.apache.org/jira/browse/HDFS-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15259: Status: Open (was: Patch Available)
[jira] [Updated] (HDFS-14504) Rename with Snapshots does not honor quota limit
[ https://issues.apache.org/jira/browse/HDFS-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-14504: - Attachment: HDFS-14504.001.patch Status: Patch Available (was: Open) > Rename with Snapshots does not honor quota limit > > > Key: HDFS-14504 > URL: https://issues.apache.org/jira/browse/HDFS-14504 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-14504.001.patch > > > Steps to Reproduce: > > {code:java} > HW15685:bin sbanerjee$ ./hdfs dfs -mkdir /dir2 > 2019-05-21 15:08:41,615 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > HW15685:bin sbanerjee$ ./hdfs dfsadmin -setQuota 3 /dir2 > 2019-05-21 15:08:57,326 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > HW15685:bin sbanerjee$ ./hdfs dfsadmin -allowSnapshot /dir2 > 2019-05-21 15:09:47,239 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Allowing snapshot on /dir2 succeeded > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/file1 > 2019-05-21 15:10:01,573 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > HW15685:bin sbanerjee$ ./hdfs dfs -createSnapshot /dir2 snap1 > 2019-05-21 15:10:16,332 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Created snapshot /dir2/.snapshot/snap1 > HW15685:bin sbanerjee$ ./hdfs dfs -mv /dir2/file1 /dir2/file2 > 2019-05-21 15:10:49,292 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... 
using builtin-java classes where > applicable > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:11:05,207 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Found 1 items > -rw-r--r-- 1 sbanerjee hadoop 0 2019-05-21 15:10 /dir2/file2 > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/filex > 2019-05-21 15:11:43,765 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=4 > HW15685:bin sbanerjee$ ./hdfs dfs -createSnapshot /dir2 snap2 > 2019-05-21 15:12:05,464 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Created snapshot /dir2/.snapshot/snap2 > HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2 > 2019-05-21 15:12:25,072 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Found 1 items > -rw-r--r-- 1 sbanerjee hadoop 0 2019-05-21 15:10 /dir2/file2 > HW15685:bin sbanerjee$ ./hdfs dfs -mv /dir2/file2 /dir2/file3 > 2019-05-21 15:12:35,908 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/filey > 2019-05-21 15:12:49,998 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... 
using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=5 > {code} > // create operation fails here as it has already exceeded the quota limit > {code} > HW15685:bin sbanerjee$ ./hdfs dfs -createSnapshot /dir2 snap3 > 2019-05-21 15:13:07,656 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Created snapshot /dir2/.snapshot/snap3 > HW15685:bin sbanerjee$ ./hdfs dfs -mv /dir2/file3 /dir2/file4 > 2019-05-21 15:13:20,715 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} > // Rename operation succeeds here adding on to the namespace quota > {code} > HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/filez > 2019-05-21 15:13:30,486 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > touchz: The NameSpace quota (directories and files) of directory /dir2 is > exceeded: quota=3 file count=6 > {code} > // File creation fails here but file count has been increased to 6, because of > the previous rename operation{code} > The quota being set here is 3. Each successive rename adds an entry to
[jira] [Assigned] (HDFS-14504) Rename with Snapshots does not honor quota limit
[ https://issues.apache.org/jira/browse/HDFS-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina reassigned HDFS-14504: Assignee: hemanthboyina
[jira] [Updated] (HDFS-14504) Rename with Snapshots does not honor quota limit
[ https://issues.apache.org/jira/browse/HDFS-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-14504: - Attachment: HDFS-14504.001.patch
[jira] [Updated] (HDFS-14504) Rename with Snapshots does not honor quota limit
[ https://issues.apache.org/jira/browse/HDFS-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-14504: - Attachment: (was: HDFS-14504.001.patch)
[jira] [Updated] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-15255: --- Attachment: HDFS-15255.001.patch Status: Patch Available (was: Open) > Consider StorageType when DatanodeManager#sortLocatedBlock() > > > Key: HDFS-15255 > URL: https://issues.apache.org/jira/browse/HDFS-15255 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-15255.001.patch > > > When only one replica of a block is on SSD and the others are on HDD, > and the client reads the data, the current logic considers only the > distance between the client and the datanode. I think it should also > consider the StorageType of the replica, and return a replica of the > preferred StorageType first.
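A sketch of the idea using hypothetical stand-in types (not the actual DatanodeManager code): network distance stays the primary sort key, and storage speed breaks ties so a same-distance SSD replica sorts ahead of an HDD one.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortByStorageDemo {
    // Declaration order doubles as the read-speed preference, fastest first.
    enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE }

    // Minimal stand-in for a located replica: the network distance the
    // existing sort already computes, plus the replica's storage type.
    static class Replica {
        final String datanode;
        final int distance;
        final StorageType type;
        Replica(String datanode, int distance, StorageType type) {
            this.datanode = datanode;
            this.distance = distance;
            this.type = type;
        }
    }

    // Distance first; ties broken by storage-type preference (enum order).
    static void sortLocated(List<Replica> replicas) {
        replicas.sort(Comparator
            .comparingInt((Replica r) -> r.distance)
            .thenComparing(r -> r.type));
    }

    public static void main(String[] args) {
        List<Replica> reps = new ArrayList<>();
        reps.add(new Replica("dn-disk", 2, StorageType.DISK));
        reps.add(new Replica("dn-ssd", 2, StorageType.SSD));
        reps.add(new Replica("dn-far-ssd", 4, StorageType.SSD));
        sortLocated(reps);
        System.out.println(reps.get(0).datanode); // nearest replica, fastest storage
    }
}
```

Keeping distance as the primary key preserves the existing locality behavior; only equidistant replicas are reordered by storage type.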
[jira] [Commented] (HDFS-15060) namenode doesn't retry JN when other JN goes down
[ https://issues.apache.org/jira/browse/HDFS-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075100#comment-17075100 ] Andrew Timonin commented on HDFS-15060: --- Sorry for the long delay... [~sodonnell] Yes, if I run hdfs dfsadmin -rollEdits before and after restarting a JN, then the NN doesn't restart. Maybe this should be mentioned in the Rolling Upgrade doc. > namenode doesn't retry JN when other JN goes down > - > > Key: HDFS-15060 > URL: https://issues.apache.org/jira/browse/HDFS-15060 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.1.1 >Reporter: Andrew Timonin >Priority: Minor > > When I upgraded hadoop to a new version (following, for example, > [https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#namenode_-rollingUpgrade]) > I hit this situation: > I'm upgrading the JNs one by one. > # Upgrade and restart JN1 > # NN sees the JN offline: > WARN client.QuorumJournalManager: Remote journal 10.73.67.132:8485 failed to > write txns 1205396-1205399. Will try to write to this JN again after the next > log roll. > # No log roll for some time (at least 1min) > # Upgrade and restart JN2 > # NN sees it again: > WARN client.QuorumJournalManager: Remote journal 10.73.67.68:8485 failed to > write txns 1205799-1205800. Will try to write to this JN again after the next > log roll. > # BUT! 
At this time we have no JN quorum: > FATAL namenode.FSEditLog: Error: flush failed for required journal > (JournalAndStream(mgr=QJM to [10.73.67.212:8485, 10.73.67.132:8485, > 10.73.67.68:8485], stream=QuorumOutputStream starting at txid 1205246)) > 10.73.67.212:8485: null [success] > 2 exceptions thrown: > 10.73.67.132:8485: Journal disabled until next roll > 10.73.67.68:8485: End of File Exception between local host is: > "srv05.lt01.gismt.crpt.tech/10.73.67.132"; destination host is: > "srv07.lt01.gismt.crpt.tech":8485; : java.io.EOFException; For more details > see: http://wiki.apache.org/hadoop/EOFException > although JN1 is online already > It looks like the NN should retry JNs marked as offline before giving up.
[jira] [Commented] (HDFS-15240) Erasure Coding: dirty buffer causes reconstruction block error
[ https://issues.apache.org/jira/browse/HDFS-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17075078#comment-17075078 ] Hadoop QA commented on HDFS-15240: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 59s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 55s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}243m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestReconstructStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-15240 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998815/HDFS-15240.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cefe41f3d2c1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/pat
[jira] [Updated] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-15255:
---
Summary: Consider StorageType when DatanodeManager#sortLocatedBlock() (was: Consider StorageType in DatanodeManager#sortLocatedBlock())
> Consider StorageType when DatanodeManager#sortLocatedBlock()
>
> Key: HDFS-15255
> URL: https://issues.apache.org/jira/browse/HDFS-15255
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Lisheng Sun
> Assignee: Lisheng Sun
> Priority: Major
>
> Consider the case where only one replica of a block is on SSD and the
> others are on HDD. When the client reads the data, the current logic
> considers only the distance between the client and the DN. I think it
> should also consider the StorageType of each replica, and give priority
> to returning a replica of the specified StorageType.
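The ordering proposed above can be sketched as a small comparator. This is not Hadoop's actual DatanodeManager code: the Replica and StorageKind types and the sort method below are illustrative stand-ins, and "distance first, storage type as tie-breaker" is one possible reading of the proposal.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-ins for Hadoop's located-block replica info;
// these names are not the real HDFS API.
enum StorageKind { SSD, DISK }

class Replica {
    final String node;
    final StorageKind storage;
    final int distance; // network distance from the reading client

    Replica(String node, StorageKind storage, int distance) {
        this.node = node;
        this.storage = storage;
        this.distance = distance;
    }
}

public class SortByStorageThenDistance {
    // Keep the existing distance-based order, but among equally distant
    // replicas prefer the faster storage type (SSD sorts before DISK
    // because of its enum declaration order).
    static void sort(List<Replica> replicas) {
        replicas.sort(Comparator
            .comparingInt((Replica r) -> r.distance)
            .thenComparing(r -> r.storage));
    }

    public static void main(String[] args) {
        List<Replica> replicas = new ArrayList<>();
        replicas.add(new Replica("dn1", StorageKind.DISK, 2));
        replicas.add(new Replica("dn2", StorageKind.SSD, 2));
        replicas.add(new Replica("dn3", StorageKind.DISK, 0));
        sort(replicas);
        for (Replica r : replicas) {
            System.out.println(r.node + " " + r.storage + " " + r.distance);
        }
    }
}
```

With this ordering a local HDD replica still wins over a remote SSD one; only ties on distance are broken by storage type, which keeps the change conservative with respect to the existing distance-based sort.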