[jira] [Comment Edited] (HDFS-15559) Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables

2020-09-09 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193386#comment-17193386
 ] 

Lisheng Sun edited comment on HDFS-15559 at 9/10/20, 5:56 AM:
--

Hi [~hexiaoqiao]

There are two places where information is missing:

1. Missing config entries in hdfs-default.xml.
2. Missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables, as follows:
{code:java}
HdfsClientConfigKeys.Write.class,
HdfsClientConfigKeys.Read.class, HdfsClientConfigKeys.Retry.class,
HdfsClientConfigKeys.ShortCircuit.class,
HdfsClientConfigKeys.Mmap.class, HdfsClientConfigKeys.HedgedRead.class,
{code}
Because HdfsClientConfigKeys.Write.class is missing from 
TestHdfsConfigFields#initializeMemberVariables, 
TestHdfsConfigFields#testCompareConfigurationClassAgainstXml still passed when 
HDFS-14694 missed adding its config to hdfs-default.xml.

The other missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables are not related to HDFS-14694, 
so I created this issue.
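A minimal sketch of the complemented initializer, assuming the missing classes 
above are simply appended to the existing array (the exact list and ordering 
in the final patch may differ):
{code:java}
@Override
public void initializeMemberVariables() {
  xmlFilename = new String("hdfs-default.xml");
  // Previously registered classes plus the constant interfaces that
  // were missing from the test.
  configurationClasses = new Class[] { HdfsClientConfigKeys.class,
      HdfsClientConfigKeys.Failover.class,
      HdfsClientConfigKeys.StripedRead.class, DFSConfigKeys.class,
      HdfsClientConfigKeys.BlockWrite.class,
      HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.class,
      HdfsClientConfigKeys.Write.class, HdfsClientConfigKeys.Read.class,
      HdfsClientConfigKeys.Retry.class,
      HdfsClientConfigKeys.ShortCircuit.class,
      HdfsClientConfigKeys.Mmap.class,
      HdfsClientConfigKeys.HedgedRead.class };
}{code}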


was (Author: leosun08):
Hi [~hexiaoqiao]

There are two places where information is missing:

1. Missing config entries in hdfs-default.xml.
2. Missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables, as follows:
{code:java}
HdfsClientConfigKeys.Write.class,
HdfsClientConfigKeys.Read.class, HdfsClientConfigKeys.Retry.class,
HdfsClientConfigKeys.ShortCircuit.class,
HdfsClientConfigKeys.Mmap.class, HdfsClientConfigKeys.HedgedRead.class,
{code}
Because HdfsClientConfigKeys.Write.class is missing from 
TestHdfsConfigFields#initializeMemberVariables, 
TestHdfsConfigFields#testCompareConfigurationClassAgainstXml still passed when 
HDFS-14694 missed adding its config to hdfs-default.xml.

The other missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables are not related to HDFS-14694, 
so I created this issue.

> Complement initialize member variables in 
> TestHdfsConfigFields#initializeMemberVariables
> 
>
> Key: HDFS-15559
> URL: https://issues.apache.org/jira/browse/HDFS-15559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15559.001.patch
>
>
> There are some missing constant interfaces in 
> TestHdfsConfigFields#initializeMemberVariables
> {code:java}
> @Override
> public void initializeMemberVariables() {
>   xmlFilename = new String("hdfs-default.xml");
>   configurationClasses = new Class[] { HdfsClientConfigKeys.class,
>   HdfsClientConfigKeys.Failover.class,
>   HdfsClientConfigKeys.StripedRead.class, DFSConfigKeys.class,
>   HdfsClientConfigKeys.BlockWrite.class,
>   HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.class };
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15559) Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables

2020-09-09 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193386#comment-17193386
 ] 

Lisheng Sun commented on HDFS-15559:


Hi [~hexiaoqiao]

There are two places where information is missing:

1. Missing config entries in hdfs-default.xml.
2. Missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables, as follows:
{code:java}
HdfsClientConfigKeys.Write.class,
HdfsClientConfigKeys.Read.class, HdfsClientConfigKeys.Retry.class,
HdfsClientConfigKeys.ShortCircuit.class,
HdfsClientConfigKeys.Mmap.class, HdfsClientConfigKeys.HedgedRead.class,
{code}
Because HdfsClientConfigKeys.Write.class is missing from 
TestHdfsConfigFields#initializeMemberVariables, 
TestHdfsConfigFields#testCompareConfigurationClassAgainstXml still passed when 
HDFS-14694 missed adding its config to hdfs-default.xml.

The other missing constant interfaces in 
TestHdfsConfigFields#initializeMemberVariables are not related to HDFS-14694, 
so I created this issue.

> Complement initialize member variables in 
> TestHdfsConfigFields#initializeMemberVariables
> 
>
> Key: HDFS-15559
> URL: https://issues.apache.org/jira/browse/HDFS-15559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15559.001.patch
>
>
> There are some missing constant interfaces in 
> TestHdfsConfigFields#initializeMemberVariables
> {code:java}
> @Override
> public void initializeMemberVariables() {
>   xmlFilename = new String("hdfs-default.xml");
>   configurationClasses = new Class[] { HdfsClientConfigKeys.class,
>   HdfsClientConfigKeys.Failover.class,
>   HdfsClientConfigKeys.StripedRead.class, DFSConfigKeys.class,
>   HdfsClientConfigKeys.BlockWrite.class,
>   HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.class };
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14997) BPServiceActor processes commands from NameNode asynchronously

2020-09-09 Thread zy.jordan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193381#comment-17193381
 ] 

zy.jordan commented on HDFS-14997:
--

Hi, [~hexiaoqiao]. I have encountered this problem under a very heavy command 
load.

I have an idea: we could move the `updateActorStatesFromHeartbeat` call into 
`CommandProcessingThread`. That way it would not block the heartbeat on the 
`writeLock`.

Do you think that would help?
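A schematic sketch of offloading the lock-heavy work to a dedicated thread 
(the class, queue, and method names here are illustrative and do not match 
the real BPServiceActor internals):
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative only: the heartbeat loop hands heavy work to a worker. */
class AsyncCommandProcessor {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private final Thread worker =
      new Thread(this::drain, "CommandProcessingThread");

  AsyncCommandProcessor() {
    worker.setDaemon(true);
    worker.start();
  }

  /** Called from the heartbeat thread; returns immediately. */
  void submit(Runnable heavyWork) {
    queue.offer(heavyWork);
  }

  private void drain() {
    try {
      while (true) {
        // e.g. run updateActorStatesFromHeartbeat (which needs the
        // writeLock) here, off the heartbeat thread.
        queue.take().run();
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}{code}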

> BPServiceActor processes commands from NameNode asynchronously
> --
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14997.001.patch, HDFS-14997.002.patch, 
> HDFS-14997.003.patch, HDFS-14997.004.patch, HDFS-14997.005.patch, 
> HDFS-14997.addendum.patch, image-2019-12-26-16-15-44-814.png
>
>
> There are two core functions, report (#sendHeartbeat, #blockReport, 
> #cacheReport) and #processCommand, in the #BPServiceActor main process flow. 
> If processCommand takes a long time, it blocks the report flow. Meanwhile, 
> processCommand can take a long time (over 1000s in the worst case I have 
> met) when the IO load of the DataNode is very high. Since some IO 
> operations are performed under #datasetLock, processing some commands (such 
> as #DNA_INVALIDATE) has to wait a long time to acquire #datasetLock. In 
> such a case, #heartbeat is not sent to the NameNode in time, which triggers 
> other disasters.
> I propose to make #processCommand asynchronous so it does not block 
> #BPServiceActor from sending heartbeats back to the NameNode under high IO 
> load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should 
> solve it in another JIRA.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15559) Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables

2020-09-09 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193366#comment-17193366
 ] 

Xiaoqiao He commented on HDFS-15559:


Hi [~leosun08], it seems the config was missed in hdfs-default.xml at 
HDFS-14694? If so, we should submit an addendum patch at HDFS-14694 directly. 
Thanks.

> Complement initialize member variables in 
> TestHdfsConfigFields#initializeMemberVariables
> 
>
> Key: HDFS-15559
> URL: https://issues.apache.org/jira/browse/HDFS-15559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15559.001.patch
>
>
> There are some missing constant interfaces in 
> TestHdfsConfigFields#initializeMemberVariables
> {code:java}
> @Override
> public void initializeMemberVariables() {
>   xmlFilename = new String("hdfs-default.xml");
>   configurationClasses = new Class[] { HdfsClientConfigKeys.class,
>   HdfsClientConfigKeys.Failover.class,
>   HdfsClientConfigKeys.StripedRead.class, DFSConfigKeys.class,
>   HdfsClientConfigKeys.BlockWrite.class,
>   HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.class };
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15559) Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193352#comment-17193352
 ] 

Hadoop QA commented on HDFS-15559:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 22s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:blue}0{color} | 

[jira] [Commented] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193346#comment-17193346
 ] 

Xiaoqiao He commented on HDFS-15564:


+1 on [^HDFS-15564.001.patch].

> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync
> 
>
> Key: HDFS-15564
> URL: https://issues.apache.org/jira/browse/HDFS-15564
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15564.001.patch
>
>
> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync; otherwise 
> it is dead code.
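For context: under JUnit 4, a public void test method is only executed when it 
is marked with {{@Test}}; without the annotation the runner silently skips it. 
A minimal illustration (the class name here is a stand-in):
{code:java}
import org.junit.Test;

public class TestPersistBlocksSketch {
  @Test  // without this annotation, JUnit 4 never runs the method
  public void testRestartDfsWithSync() throws Exception {
    // test body
  }
}{code}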



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?focusedWorklogId=481219=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481219
 ]

ASF GitHub Bot logged work on HDFS-15565:
-

Author: ASF GitHub Bot
Created on: 10/Sep/20 04:00
Start Date: 10/Sep/20 04:00
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao closed pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481219)
Time Spent: 40m  (was: 0.5h)

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, there is an invalid line of code, as 
> follows:
> {code:java}
> static private int doBalance(Collection<URI> namenodes,
>     Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>     throws IOException, InterruptedException {
>   ...
>   System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes "
>       + "Left To Move Bytes Being Moved");
>   ...
> }
> {code}
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15565:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

[~jianghuazhu] Thanks for your report and contributions.
BTW, please submit the patch either here or on GitHub, whichever you prefer, 
but keep it in only one place (GitHub is the better practice IMO); otherwise 
it will disperse reviewers'/watchers' comments. Thanks.
For this issue, I think this standard output of the balancer is useful: it 
prints the header line for the balancer progress, as [~sunchao] commented on 
GitHub.
Please feel free to reopen if this does not answer your questions.
Thanks again.

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, there is an invalid line of code, as 
> follows:
> {code:java}
> static private int doBalance(Collection<URI> namenodes,
>     Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>     throws IOException, InterruptedException {
>   ...
>   System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes "
>       + "Left To Move Bytes Being Moved");
>   ...
> }
> {code}
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15556) Fix NPE in DatanodeDescriptor#updateStorageStats when handle DN Lifeline

2020-09-09 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-15556.

Resolution: Duplicate

Closing this issue and linking it to HDFS-14042.
[~haiyang Hu] Please feel free to reopen if you meet other issues that 
HDFS-14042 cannot resolve. Thanks.
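For reference, the NPE in the description below occurs at 
storage.receivedHeartbeat(report) when the storage lookup returns null. One 
defensive fix is a null check before use; a minimal sketch of that loop body, 
which may not match the exact patch in HDFS-14042:
{code:java}
DatanodeStorageInfo storage;
synchronized (storageMap) {
  storage = storageMap.get(report.getStorage().getStorageID());
}
if (storage == null) {
  // The storage may not be registered yet (e.g. a lifeline arrives
  // before the heartbeat that adds it); skip instead of dereferencing
  // null.
  LOG.info("Skipping heartbeat for unknown storage {}",
      report.getStorage().getStorageID());
  continue;
}
if (checkFailedStorages) {
  failedStorageInfos.remove(storage);
}
storage.receivedHeartbeat(report);
{code}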

> Fix NPE in DatanodeDescriptor#updateStorageStats when handle DN Lifeline
> 
>
> Key: HDFS-15556
> URL: https://issues.apache.org/jira/browse/HDFS-15556
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: huhaiyang
>Priority: Critical
> Attachments: HDFS-15556.001.patch, NN-CPU.png, NN_DN.LOG
>
>
> In our cluster, the NameNode hits an NPE when processing lifeline messages 
> sent by the DataNode, which causes an incorrect maxLoad to be calculated by 
> the NN. Because the DataNode is then identified as busy and cannot be 
> allocated in chooseDataNode, the resulting retry loop drives CPU usage high 
> and reduces the processing performance of the cluster.
> *NameNode the exception stack*:
> {code:java}
> 2020-08-25 00:59:02,977 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 5 on 8022, call Call#20535 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.DatanodeLifelineProtocol.sendLifeline 
> from x:34766
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateStorageStats(DatanodeDescriptor.java:460)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateHeartbeatState(DatanodeDescriptor.java:390)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.updateLifeline(HeartbeatManager.java:254)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.handleLifeline(DatanodeManager.java:1805)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleLifeline(FSNamesystem.java:4039)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendLifeline(NameNodeRpcServer.java:1761)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeLifelineProtocolServerSideTranslatorPB.sendLifeline(DatanodeLifelineProtocolServerSideTranslatorPB.java:62)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeLifelineProtocolProtos$DatanodeLifelineProtocolService$2.callBlockingMethod(DatanodeLifelineProtocolProtos.java:409)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)
> {code}
> {code:java}
> // DatanodeDescriptor#updateStorageStats
> ...
> for (StorageReport report : reports) {
>   DatanodeStorageInfo storage = null;
>   synchronized (storageMap) {
> storage =
> storageMap.get(report.getStorage().getStorageID());
>   }
>   if (checkFailedStorages) {
> failedStorageInfos.remove(storage);
>   }
>   storage.receivedHeartbeat(report);  //  NPE exception occurred here 
>   // skip accounting for capacity of PROVIDED storages!
>   if (StorageType.PROVIDED.equals(storage.getStorageType())) {
> continue;
>   }
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=481205=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481205
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 10/Sep/20 02:52
Start Date: 10/Sep/20 02:52
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r484551422



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+
+/**

Review comment:
   ViewFileSystemOverloadScheme --> ViewFsOverloadScheme

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import java.net.URISyntaxException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.AbstractFileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+
+/**
+ * This class is the AbstractFileSystem implementation corresponding to
+ * ViewFileSystemOverloadScheme. This class extends ViewFs
+ * for the overloaded scheme file system. Mount link configurations and
+ * in-memory mount table building behaviors are inherited from ViewFs.
+ * Unlike the ViewFs scheme (viewfs://), users can use any scheme.
+ *
+ * To use this class, the following configurations need to be added in
+ * the core-site.xml file.
+ * 1) fs.AbstractFileSystem.<scheme>.impl
+ *= org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme
+ * 2) fs.viewfs.overload.scheme.target.abstract.<scheme>.impl
+ *= "<hadoop compatible file system implementation class for the scheme>"
+ *
+ * Here <scheme> can be any scheme, but with that scheme there should be a
+ * hadoop compatible file 

[jira] [Updated] (HDFS-15559) Complement initialize member variables in TestHdfsConfigFields#initializeMemberVariables

2020-09-09 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15559:
---
Attachment: HDFS-15559.001.patch
Status: Patch Available  (was: Open)

> Complement initialize member variables in 
> TestHdfsConfigFields#initializeMemberVariables
> 
>
> Key: HDFS-15559
> URL: https://issues.apache.org/jira/browse/HDFS-15559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15559.001.patch
>
>
> There are some missing constant interfaces in 
> TestHdfsConfigFields#initializeMemberVariables
> {code:java}
> @Override
> public void initializeMemberVariables() {
>   xmlFilename = new String("hdfs-default.xml");
>   configurationClasses = new Class[] { HdfsClientConfigKeys.class,
>   HdfsClientConfigKeys.Failover.class,
>   HdfsClientConfigKeys.StripedRead.class, DFSConfigKeys.class,
>   HdfsClientConfigKeys.BlockWrite.class,
>   HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.class };
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15556) Fix NPE in DatanodeDescriptor#updateStorageStats when handle DN Lifeline

2020-09-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193291#comment-17193291
 ] 

Fei Hui commented on HDFS-15556:


[~haiyang Hu] It's the same as HDFS-14042. Should we resolve this issue as a 
duplicate?

> Fix NPE in DatanodeDescriptor#updateStorageStats when handle DN Lifeline
> 
>
> Key: HDFS-15556
> URL: https://issues.apache.org/jira/browse/HDFS-15556
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: huhaiyang
>Priority: Critical
> Attachments: HDFS-15556.001.patch, NN-CPU.png, NN_DN.LOG
>
>
> In our cluster, the NameNode hits an NPE when processing lifeline messages 
> sent by the DataNode, which causes an incorrect maxLoad to be calculated by 
> the NN. Because the DataNode is then identified as busy and cannot be 
> allocated in chooseDataNode, the resulting retry loop drives CPU usage high 
> and reduces the processing performance of the cluster.
> *NameNode the exception stack*:
> {code:java}
> 2020-08-25 00:59:02,977 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 5 on 8022, call Call#20535 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.DatanodeLifelineProtocol.sendLifeline 
> from x:34766
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateStorageStats(DatanodeDescriptor.java:460)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateHeartbeatState(DatanodeDescriptor.java:390)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.updateLifeline(HeartbeatManager.java:254)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.handleLifeline(DatanodeManager.java:1805)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleLifeline(FSNamesystem.java:4039)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendLifeline(NameNodeRpcServer.java:1761)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeLifelineProtocolServerSideTranslatorPB.sendLifeline(DatanodeLifelineProtocolServerSideTranslatorPB.java:62)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeLifelineProtocolProtos$DatanodeLifelineProtocolService$2.callBlockingMethod(DatanodeLifelineProtocolProtos.java:409)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)
> {code}
> {code:java}
> // DatanodeDescriptor#updateStorageStats
> ...
> for (StorageReport report : reports) {
>   DatanodeStorageInfo storage = null;
>   synchronized (storageMap) {
> storage =
> storageMap.get(report.getStorage().getStorageID());
>   }
>   if (checkFailedStorages) {
> failedStorageInfos.remove(storage);
>   }
>   storage.receivedHeartbeat(report);  //  NPE exception occurred here 
>   // skip accounting for capacity of PROVIDED storages!
>   if (StorageType.PROVIDED.equals(storage.getStorageType())) {
> continue;
>   }
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?focusedWorklogId=481188=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481188
 ]

ASF GitHub Bot logged work on HDFS-15563:
-

Author: ASF GitHub Bot
Created on: 10/Sep/20 00:51
Start Date: 10/Sep/20 00:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2295:
URL: https://github.com/apache/hadoop/pull/2295#issuecomment-689903340


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 56s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 22s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 52s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 52s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m  0s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 232m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2295/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2295 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f01ab8cd82b5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 

[jira] [Commented] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193240#comment-17193240
 ] 

Hadoop QA commented on HDFS-15566:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
15s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 332 unchanged - 0 fixed = 335 total (was 332) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 31s{color} 
| 

[jira] [Commented] (HDFS-15567) [SBN Read] HDFS should expose msync() API to allow downstream applications to call it explicitly.

2020-09-09 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193230#comment-17193230
 ] 

Konstantin Shvachko commented on HDFS-15567:


Here are some options to start the discussion:
# A straightforward way is to just add {{msync()}} to {{FileSystem}}, with 
the default implementation throwing an unsupported-operation error (see the 
sketch below). This is similar to snapshot or extended attributes operations, 
which have meaningful implementations only for HDFS-based file systems: 
{{WebHdfsFileSystem}}, {{ViewFileSystem}}.
# Add {{DistributedFileSystem.msync()}} only, not {{FileSystem.msync()}}. 
{{msync()}} is specific to HDFS, although both WebHDFS and ViewFS could 
benefit from it.
# Expose {{msync()}} via some utility class like {{HAUtil}}; in the end it is 
an HA feature.
# Do nothing: calling {{msync()}} via DFSClient is good enough.
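As a rough illustration of option 1, a default-unsupported method on the base 
class might look like the following sketch (the class names here are 
stand-ins, not the committed Hadoop API):
{code:java}
import java.io.IOException;

/** Sketch of option 1: default-unsupported msync() on the base class. */
abstract class BaseFileSystem {
  /** Synchronize client metadata state with the Active NameNode. */
  public void msync() throws IOException {
    throw new UnsupportedOperationException(
        getClass().getCanonicalName() + " does not support msync");
  }
}

/** HDFS-backed file systems would override it with a real implementation. */
class DistributedFileSystemSketch extends BaseFileSystem {
  @Override
  public void msync() throws IOException {
    // in the real DistributedFileSystem this would delegate to
    // DFSClient#msync()
  }
}{code}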

> [SBN Read] HDFS should expose msync() API to allow downstream applications 
> to call it explicitly.
> --
>
> Key: HDFS-15567
> URL: https://issues.apache.org/jira/browse/HDFS-15567
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, hdfs-client
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Consistent reads from Standby (HDFS-13688) introduced the {{msync()}} API, 
> which updates the client's state ID with the current state of the Active 
> NameNode to guarantee consistency of subsequent calls to an ObserverNode. 
> Currently this API is exposed only via {{DFSClient}}, which makes it hard 
> for applications to access {{msync()}}. One way is to use something like 
> this:
> {code}
> if(fs instanceof DistributedFileSystem) {
>   ((DistributedFileSystem)fs).getClient().msync();
> }
> {code}
> This should be exposed both for {{FileSystem}} and {{FileContext}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15554) RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15554?focusedWorklogId=481115=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481115
 ]

ASF GitHub Bot logged work on HDFS-15554:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 22:41
Start Date: 09/Sep/20 22:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#issuecomment-689860446


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32)  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new 
+ 30 unchanged - 2 fixed = 30 total (was 32)  |
   | -0 :warning: |  checkstyle  |   0m 17s |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   8m 28s |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 111m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2266/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2266 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 8083f1367af7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private 

[jira] [Created] (HDFS-15567) [SBN Read] HDFS should expose msync() API to allow downstream applications to call it explicitly.

2020-09-09 Thread Konstantin Shvachko (Jira)
Konstantin Shvachko created HDFS-15567:
--

 Summary: [SBN Read] HDFS should expose msync() API to allow 
downstream applications to call it explicitly.
 Key: HDFS-15567
 URL: https://issues.apache.org/jira/browse/HDFS-15567
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, hdfs-client
Reporter: Konstantin Shvachko


Consistent reads from Standby (HDFS-13688) introduced the {{msync()}} API, 
which updates the client's state ID with the current state of the Active 
NameNode to guarantee consistency of subsequent calls to an ObserverNode. 
Currently this API is exposed only via {{DFSClient}}, which makes it hard for 
applications to access {{msync()}}. One way is to use something like this:
{code}
if(fs instanceof DistributedFileSystem) {
  ((DistributedFileSystem)fs).getClient().msync();
}
{code}
This should be exposed both for {{FileSystem}} and {{FileContext}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193191#comment-17193191
 ] 

Wei-Chiu Chuang edited comment on HDFS-15566 at 9/9/20, 9:44 PM:
-

Should we add a doc telling users not to perform a rolling upgrade to Hadoop 
3.3.0?

We should also consider supporting the downgrade scenario, which means not 
writing out the mtime field to the fsimage until the upgrade is finalized. 
See: HDFS-14396


was (Author: jojochuang):
Should we add a doc telling users not to perform a rolling upgrade to Hadoop 3.3.0?

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}
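To make the failure mode concrete: edit-log op fields are parsed conditionally 
on the layout version, so a version/feature mismatch shifts every subsequent 
field read. A schematic sketch (the names and the version constant are 
illustrative, not the actual FSEditLogOp code):
{code:java}
import java.io.DataInputStream;
import java.io.IOException;

/** Schematic: why skipping one optional field garbles all later ones. */
class EditOpReaderSketch {
  // NN layout versions are negative; "supports" means logVersion <= this.
  static final int MTIME_FEATURE_VERSION = -66; // illustrative value

  void readFields(DataInputStream in, int logVersion) throws IOException {
    long snapshotId = in.readLong();
    long mtime = 0;
    if (logVersion <= MTIME_FEATURE_VERSION) {
      // If the writer serialized mtime but the reader believes the log
      // is older, this branch is skipped and the 8 mtime bytes are
      // consumed by the next field read instead.
      mtime = in.readLong();
    }
    int rpcClientIdLength = in.readInt(); // reads wrong bytes on mismatch
  }
}{code}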



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193191#comment-17193191
 ] 

Wei-Chiu Chuang commented on HDFS-15566:


Should we add a doc telling users not to perform a rolling upgrade to Hadoop 3.3.0?

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15563:
--
Description: 
Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
enabled.

Root cause analysis:

{{SnapshottableDirectoryStatus}} paths retrieved inside
{{DFSClient#getSnapshotRoot}} aren't appended with '/', causing directories
that merely share a path prefix with a snapshottable directory to be
mistakenly classified as being under it (see the sketch below).
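
For illustration, a minimal, self-contained sketch of the failing prefix check
(class and method names here are assumptions, not the actual DFSClient code);
appending the trailing '/' before the startsWith comparison avoids the false
match:
{code:java}
import java.util.List;

// Hypothetical sketch, not the actual DFSClient#getSnapshotRoot code.
class SnapshotRootSketch {
  static String findSnapshotRoot(String path, List<String> snapshottableDirs) {
    for (String root : snapshottableDirs) {
      // Buggy: "/user/hrt_4/newdir/subdir2".startsWith("/user/hrt_4/newdir/subdir")
      // is true, so subdir2 is wrongly treated as inside the snapshottable dir.
      // Fixed: compare against the root with a trailing '/' appended.
      String rootWithSlash = root.endsWith("/") ? root : root + "/";
      if (path.equals(root) || path.startsWith(rootWithSlash)) {
        return root;
      }
    }
    return null; // not under any snapshottable directory
  }
}
{code}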

Thanks [~shashikant] for the test case addition.

---

Repro:

{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}

  was:
Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
enabled.

Root cause analysis:

{{DFSClient#getSnapshotRoot}} check loop 

Thanks [~shashikant] for the test case addition.

---

Repro:

{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}


> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> {{SnapshottableDirectoryStatus}} paths retrieved inside
> {{DFSClient#getSnapshotRoot}} aren't appended with '/', causing directories
> that merely share a path prefix with a snapshottable directory to be
> mistakenly classified as being under it.
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}

[jira] [Work logged] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?focusedWorklogId=481057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481057
 ]

ASF GitHub Bot logged work on HDFS-15563:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 20:57
Start Date: 09/Sep/20 20:57
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #2295:
URL: https://github.com/apache/hadoop/pull/2295


   https://issues.apache.org/jira/browse/HDFS-15563
   
   This may impact clusters where `dfs.namenode.snapshot.trashroot.enabled` is
set to `true`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481057)
Remaining Estimate: 0h
Time Spent: 10m

> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> {{DFSClient#getSnapshotRoot}} check loop 
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15563:
--
Status: Patch Available  (was: Open)

> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> {{DFSClient#getSnapshotRoot}} check loop 
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15563:
--
Labels: pull-request-available  (was: )

> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> {{DFSClient#getSnapshotRoot}} check loop 
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15563:
--
Description: 
Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
enabled.

Root cause analysis:

{{DFSClient#getSnapshotRoot}} check loop 

Thanks [~shashikant] for the test case addition.

---

Repro:

{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}

  was:
Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
enabled.

Root cause analysis:

DFSClient#getSnapshotRoot

Thanks [~shashikant] for the test case addition.

---

Repro:

{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}


> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> {{DFSClient#getSnapshotRoot}} check loop 
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15563:
--
Description: 
Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
enabled.

Root cause analysis:

DFSClient#getSnapshotRoot

Thanks [~shashikant] for the test case addition.

---

Repro:

{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}

  was:
{code:java}
1. snapshottable directory present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2; it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}


> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> Note: Only impacts a user if {{dfs.namenode.snapshot.trashroot.enabled}} is 
> enabled.
> Root cause analysis:
> DFSClient#getSnapshotRoot
> Thanks [~shashikant] for the test case addition.
> ---
> Repro:
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15563) Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15563:
--
Summary: Incorrect getTrashRoot return value when a non-snapshottable dir 
prefix matches the path of a snapshottable dir  (was: getTrashRoot() location 
can be wrong when the non-snapshottable directory contains the name of the 
snapshottable directory in its name )

> Incorrect getTrashRoot return value when a non-snapshottable dir prefix 
> matches the path of a snapshottable dir
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> {code:java}
> 1. snapshottable directory present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2; it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out as "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source /user/hrt_4/newdir/subdir2 and dest /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2020-09-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193158#comment-17193158
 ] 

Íñigo Goiri commented on HDFS-13293:


I like it too; let's remove the changes to commons, though.

> RBF: The RouterRPCServer should transfer CallerContext and client ip to 
> NamenodeRpcServer
> -
>
> Key: HDFS-13293
> URL: https://issues.apache.org/jira/browse/HDFS-13293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-13293.001.patch
>
>
> Otherwise, the namenode doesn't know the client's CallerContext.
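
As a rough illustration of what transferring the context involves (a sketch
under assumptions, not the patch itself), the Router-side handler would rebuild
the current CallerContext so the original client's identity and IP ride along
on the downstream NameNode RPC:
{code:java}
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.ipc.Server;

// Hypothetical sketch of forwarding the caller's context from the Router;
// the real patch wires this into the RouterRpcServer request path.
class RouterCallerContextSketch {
  static void tagCurrentHandlerWithRealClient() {
    CallerContext origin = CallerContext.getCurrent();
    String clientIp = Server.getRemoteAddress(); // IP of the real client
    String context = (origin == null ? "" : origin.getContext() + ",")
        + "clientIp:" + clientIp;
    // Downstream RPCs made from this handler thread now carry the client info.
    CallerContext.setCurrent(new CallerContext.Builder(context).build());
  }
}
{code}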



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15554) RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15554?focusedWorklogId=481039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481039
 ]

ASF GitHub Bot logged work on HDFS-15554:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 20:07
Start Date: 09/Sep/20 20:07
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r485892025



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
##
@@ -78,6 +83,11 @@
   "Hadoop:service=Router,name=FederationRPC";
   private static List mockMountTable;
   private static StateStoreService stateStore;
+  private static RouterRpcClient mockRpcClient;
+  private static final Map mockResponse0 =

Review comment:
   checkstyle is complaining





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481039)
Time Spent: 2h 50m  (was: 2h 40m)

> RBF: force router check file existence in destinations before adding/updating 
> mount points
> --
>
> Key: HDFS-15554
> URL: https://issues.apache.org/jira/browse/HDFS-15554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Adding/updating mount points is currently a router-only action, with no
> validation in the downstream namenodes that the destination files/directories
> exist.
> In practice we have ended up with dangling mount points: when clients call
> listStatus the entry is returned, but if they then try to access the file, a
> FileNotFoundException is thrown.
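
A minimal sketch of the kind of pre-add check being proposed (the class name
and the direct FileSystem access are assumptions for illustration; the actual
change routes the existence check through the Router's RPC client):
{code:java}
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: verify every destination of a mount entry exists
// before the mount table add/update is accepted.
class MountDestinationCheck {
  static boolean destinationsExist(Map<String, String> nsToPath,
      Configuration conf) throws IOException {
    for (Map.Entry<String, String> e : nsToPath.entrySet()) {
      // Key is the nameservice id, value is the path inside that namespace.
      FileSystem fs = new Path("hdfs://" + e.getKey() + "/").getFileSystem(conf);
      if (!fs.exists(new Path(e.getValue()))) {
        return false; // dangling destination: reject the add/update
      }
    }
    return true;
  }
}
{code}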



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15554) RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15554?focusedWorklogId=481038&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481038
 ]

ASF GitHub Bot logged work on HDFS-15554:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 20:06
Start Date: 09/Sep/20 20:06
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r485891289



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
##
@@ -128,7 +175,6 @@ public void testAddMountTable() throws IOException {
 MountTable newEntry = MountTable.newInstance(
 "/testpath", Collections.singletonMap("ns0", "/testdir"),
 Time.now(), Time.now());
-

Review comment:
   Avoid the empty change





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481038)
Time Spent: 2h 40m  (was: 2.5h)

> RBF: force router check file existence in destinations before adding/updating 
> mount points
> --
>
> Key: HDFS-15554
> URL: https://issues.apache.org/jira/browse/HDFS-15554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Adding/updating mount points is currently a router-only action, with no
> validation in the downstream namenodes that the destination files/directories
> exist.
> In practice we have ended up with dangling mount points: when clients call
> listStatus the entry is returned, but if they then try to access the file, a
> FileNotFoundException is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14350?focusedWorklogId=480962&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480962
 ]

ASF GitHub Bot logged work on HDFS-14350:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:03
Start Date: 09/Sep/20 18:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #582:
URL: https://github.com/apache/hadoop/pull/582#issuecomment-689726036


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  6s |  https://github.com/apache/hadoop/pull/582 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/582 |
   | JIRA Issue | HDFS-14350 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-582/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480962)
Remaining Estimate: 0h
Time Spent: 10m

> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ErasureCodingWorker, stripedReconstructionPool is created by
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>   numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>   numThreads, 60, new LinkedBlockingQueue<>(),
>   "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }{code}
> So stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an
> unbounded LinkedBlockingQueue; the pool therefore never has more than its 2
> core threads active, and dfs.datanode.ec.reconstruction.threads does not
> take effect.
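
The queue semantics above are easy to demonstrate in isolation; the following
standalone snippet (an illustration, not DataNode code) shows that with an
unbounded LinkedBlockingQueue the pool never grows past its core size,
regardless of the configured maximum:
{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
  public static void main(String[] args) throws InterruptedException {
    // core=2, max=8, unbounded queue -- mirrors the ErasureCodingWorker setup.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    for (int i = 0; i < 100; i++) {
      pool.execute(() -> {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
      });
    }
    Thread.sleep(200);
    // Prints 2, not 8: extra threads are only created when the queue rejects
    // an offer, and an unbounded LinkedBlockingQueue never rejects one.
    System.out.println("pool size = " + pool.getPoolSize());
    pool.shutdownNow();
  }
}
{code}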



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-14350:
--
Labels: pull-request-available  (was: )

> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ErasureCodingWorker, stripedReconstructionPool is created by
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>   numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>   numThreads, 60, new LinkedBlockingQueue<>(),
>   "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }{code}
> So stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an
> unbounded LinkedBlockingQueue; the pool therefore never has more than its 2
> core threads active, and dfs.datanode.ec.reconstruction.threads does not
> take effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193094#comment-17193094
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} https://github.com/apache/hadoop/pull/582 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-582/1/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>
> In ErasureCodingWorker, stripedReconstructionPool is created by
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
> DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>   numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>   numThreads, 60, new LinkedBlockingQueue<>(),
>   "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }{code}
> So stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an
> unbounded LinkedBlockingQueue; the pool therefore never has more than its 2
> core threads active, and dfs.datanode.ec.reconstruction.threads does not
> take effect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-15566:

Attachment: HDFS-15566-001.patch

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Brahma Reddy Battula (Jira)
Brahma Reddy Battula created HDFS-15566:
---

 Summary: NN restart fails after RollingUpgrade from  3.1.3/3.2.1 
to 3.3.0
 Key: HDFS-15566
 URL: https://issues.apache.org/jira/browse/HDFS-15566
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula


* After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, it 
fails while replaying edit logs.
 * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
bits to the editLog transactions.

 * When NN is restarted and the edit logs are replayed, the NN reads the old 
layout version from the editLog file. When parsing the transactions, it assumes 
that the transactions are also from the previous layout and hence skips parsing 
the *modification time* bits.
 * This cascades into reading the wrong set of bits for other fields and leads 
to NN shutting down.

{noformat}
2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
NameNode.java:1751
java.lang.IllegalArgumentException
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
 at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
 at java.lang.String.valueOf(String.java:2994)
 at java.lang.StringBuilder.append(StringBuilder.java:131)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}
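
To make the failure mode concrete, here is a minimal sketch of version-gated
deserialization (field names, ordering, and the version cutoff are assumptions
for illustration; the real logic lives in FSEditLogOp#readFields): because the
gate uses the stale layout version from the file header, the reader skips a
field the writer actually serialized, and every later read is misaligned.
{code:java}
import java.io.DataInput;
import java.io.IOException;

// Hypothetical sketch, not the actual FSEditLogOp code.
class DeleteSnapshotOpSketch {
  long mtime; // modification-time field added by HDFS-14922/14924/15054
  String snapshotRoot;
  String snapshotName;

  void readFields(DataInput in, int logVersion) throws IOException {
    // Bug pattern: after a rolling upgrade the edit log file header still
    // carries the pre-upgrade layout version, so this branch is skipped even
    // though the upgraded writer did serialize mtime.
    if (supportsModificationTime(logVersion)) {
      mtime = in.readLong();
    }
    // Every subsequent read is now off by 8 bytes; eventually a sanity check
    // (here, the Preconditions in ClientId.toString) fails and the NN exits.
    snapshotRoot = in.readUTF();
    snapshotName = in.readUTF();
  }

  private static boolean supportsModificationTime(int logVersion) {
    // HDFS layout versions are negative; cutoff assumed for illustration only.
    return logVersion <= -67;
  }
}
{code}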



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-15566:
---

Assignee: Brahma Reddy Battula

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0

2020-09-09 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-15566:

Status: Patch Available  (was: Open)

> NN restart fails after RollingUpgrade from  3.1.3/3.2.1 to 3.3.0
> 
>
> Key: HDFS-15566
> URL: https://issues.apache.org/jira/browse/HDFS-15566
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-15566-001.patch
>
>
> * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, 
> it fails while replaying edit logs.
>  * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* 
> bits to the editLog transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the *modification time* bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> {noformat}
> 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361
> 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. | 
> NameNode.java:1751
> java.lang.IllegalArgumentException
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>  at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606)
>  at java.lang.String.valueOf(String.java:2994)
>  at java.lang.StringBuilder.append(StringBuilder.java:131)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:959)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-09 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193079#comment-17193079
 ] 

Leon Gao commented on HDFS-15548:
-

Yes, I think the common use case will be between DISK and ARCHIVE, as they are 
normally on HDDs.

Putting cold blocks on SSD would be a waste, so I guess it may not be needed.

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount,
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.
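
For reference, storage types are already declared per directory in
dfs.datanode.data.dir, so the proposal amounts to allowing two tagged
directories on one mount plus a capacity split. A sketch of what the
configuration could look like (the capacity-ratio property name below is
hypothetical, invented here only for illustration):
{code:xml}
<!-- Two directories on the same device mount, tagged with different types. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/mnt/disk1/hdfs/data,[ARCHIVE]/mnt/disk1/hdfs/archive</value>
</property>
<!-- Hypothetical property: share of the mount's capacity given to ARCHIVE. -->
<property>
  <name>dfs.datanode.archive.capacity.ratio</name>
  <value>0.5</value>
</property>
{code}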



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193068#comment-17193068
 ] 

Hadoop QA commented on HDFS-15565:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 32 unchanged - 1 fixed = 32 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |

[jira] [Work logged] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?focusedWorklogId=480919&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480919
 ]

ASF GitHub Bot logged work on HDFS-15565:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:43
Start Date: 09/Sep/20 16:43
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292#discussion_r485763717



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -705,7 +705,6 @@ static private int doBalance(Collection<URI> namenodes,
 LOG.info("excluded nodes = " + p.getExcludedNodes());
 LOG.info("source nodes = " + p.getSourceNodes());
 checkKeytabAndInit(conf);
-System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");

Review comment:
   Hmm, why do you think this is not valid? I thought this was the header line 
for the balancer progress.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480919)
Time Spent: 0.5h  (was: 20m)

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
>  static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException
> { ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes 
> Left To Move Bytes Being Moved"); ... }
>  
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193015#comment-17193015
 ] 

Íñigo Goiri commented on HDFS-15564:


The tests run now:
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/138/testReport/org.apache.hadoop.hdfs/TestPersistBlocks/

+1 on  [^HDFS-15564.001.patch].

> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync
> 
>
> Key: HDFS-15564
> URL: https://issues.apache.org/jira/browse/HDFS-15564
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15564.001.patch
>
>
> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync,  otherwise 
> it’s dead code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?focusedWorklogId=480913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480913
 ]

ASF GitHub Bot logged work on HDFS-15565:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:23
Start Date: 09/Sep/20 16:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292#issuecomment-689671513


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 51s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 
fixed = 33 total (was 34)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 100m  8s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 43s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2292/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ccd0b8079a2d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 773ac799c63 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192949#comment-17192949
 ] 

jianghua zhu commented on HDFS-15516:
-

Hi [~hexiaoqiao] , is your suggestion like this:
logAuditEvent(false, "create(createFlag=CREATE:OVERWRITE)", src);
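
If it helps the discussion, here is a minimal, self-contained sketch of how such 
a cmd string could be rendered from the flag set. The class and helper names are 
hypothetical stand-ins, not the actual patch; only CreateFlag itself is the real 
Hadoop type.
{code:java}
import java.util.EnumSet;
import java.util.stream.Collectors;

import org.apache.hadoop.fs.CreateFlag;

public class AuditCmdSketch {
  // Render e.g. "create(createFlag=CREATE:OVERWRITE)" from the flag set.
  static String createCmd(EnumSet<CreateFlag> flags) {
    return "create(createFlag="
        + flags.stream().map(Enum::name).collect(Collectors.joining(":"))
        + ")";
  }

  public static void main(String[] args) {
    // Prints: create(createFlag=CREATE:OVERWRITE)
    System.out.println(
        createCmd(EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE)));
  }
}
{code}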

 

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if file create happens with flags like overwrite , the audit logs 
> doesn't seem to contain the info regarding the flags in the audit logs. It 
> would be useful to add info regarding the create options in the audit logs 
> similar to Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192943#comment-17192943
 ] 

jianghua zhu commented on HDFS-15516:
-

[~LiJinglun] , thank you very much for your suggestions.

 

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if file create happens with flags like overwrite , the audit logs 
> doesn't seem to contain the info regarding the flags in the audit logs. It 
> would be useful to add info regarding the create options in the audit logs 
> similar to Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192940#comment-17192940
 ] 

Hadoop QA commented on HDFS-15564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
27s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 21 unchanged - 2 fixed = 21 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192891#comment-17192891
 ] 

Xiaoqiao He commented on HDFS-15516:


[~jianghuazhu] Thanks for involving me here. Actually, I do not think it is a 
good idea to add another new field for create, because it could break some log 
collectors or parsers. I prefer to add a parameter to the cmd field of `create`, 
just as `rename` does. I would like to hear some other suggestions. Thanks.

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if file create happens with flags like overwrite , the audit logs 
> doesn't seem to contain the info regarding the flags in the audit logs. It 
> would be useful to add info regarding the create options in the audit logs 
> similar to Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2020-09-09 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-14694:
---
Fix Version/s: 3.4.0

> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, 
> HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch, 
> HDFS-14694.006.patch, HDFS-14694.007.patch, HDFS-14694.008.patch, 
> HDFS-14694.009.patch, HDFS-14694.010.patch, HDFS-14694.011.patch, 
> HDFS-14694.012.patch, HDFS-14694.013.patch, HDFS-14694.014.patch
>
>
> HDFS uses file leases to manage opened files; when a file is not closed 
> normally, the NN will recover the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the hdfs-client never 
> dies and the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions by itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were not 
> closed normally), but in our experience most services (in our company) don't 
> handle this condition properly, which causes lots of files in an abnormal 
> state or even data loss.
> This Jira proposes adding a feature that calls the recoverLease operation 
> automatically when a DFSOutputStream close encounters an exception. It should 
> be disabled by default, but when somebody builds a long-running service based 
> on HDFS, they can enable this option.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years; it's quite useful according to our experience.
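
For context, the manual pattern this feature automates looks roughly like the 
hedged sketch below. It uses the public DistributedFileSystem#recoverLease(Path) 
API; the wrapper method itself is illustrative, not the patch.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CloseWithRecoverLease {
  // Close the stream; if close fails, ask the NN to start lease recovery
  // so the file does not stay open-for-write until the hard limit expires.
  static void closeSafely(FSDataOutputStream out, DistributedFileSystem fs,
      Path path) throws IOException {
    try {
      out.close();
    } catch (IOException e) {
      // recoverLease returns true only once the file is closed; callers
      // usually poll recoverLease()/isFileClosed() until recovery completes.
      boolean closed = fs.recoverLease(path);
      if (!closed) {
        // recovery is in progress; retry or poll before reusing the file
      }
      throw e;
    }
  }
}
{code}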



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2020-09-09 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-14694:
---
Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanks [~leosun08] and [~zhangchen] for your report and contributions!
Thanks [~ayushtkn], [~weichiu] and [~xkrogen] for your reviews!

> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, 
> HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch, 
> HDFS-14694.006.patch, HDFS-14694.007.patch, HDFS-14694.008.patch, 
> HDFS-14694.009.patch, HDFS-14694.010.patch, HDFS-14694.011.patch, 
> HDFS-14694.012.patch, HDFS-14694.013.patch, HDFS-14694.014.patch
>
>
> HDFS uses file leases to manage opened files; when a file is not closed 
> normally, the NN will recover the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the hdfs-client never 
> dies and the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions by itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were not 
> closed normally), but in our experience most services (in our company) don't 
> handle this condition properly, which causes lots of files in an abnormal 
> state or even data loss.
> This Jira proposes adding a feature that calls the recoverLease operation 
> automatically when a DFSOutputStream close encounters an exception. It should 
> be disabled by default, but when somebody builds a long-running service based 
> on HDFS, they can enable this option.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years; it's quite useful according to our experience.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2020-09-09 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192877#comment-17192877
 ] 

Hemanth Boyina commented on HDFS-13522:
---

thanks [~surendrasingh] for the review
{quote} # Load balancing between multiple observers.{quote}
we are shuffling the observers so that the same observer doesn't always receive 
the call
{code:java}
// Shuffle the observers if there is more than one,
// so the same observer doesn't always come first.
if (observerRead && observerMemberships.size() > 1) {
  Collections.shuffle(observerMemberships);
  Collections
      .sort(nonObserverMemberships, new NamenodePriorityComparator());
} {code}
{quote}2. Webhdfs call, I think you may get NPE for webhdfs call
{quote}
will fix it, add a test case, and update the patch
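
For readers following along, here is a minimal, self-contained sketch of the 
shuffle-then-sort ordering shown above; the string lists stand in for the 
router's membership records and NamenodePriorityComparator.
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ObserverOrderSketch {
  public static void main(String[] args) {
    List<String> observerMemberships =
        new ArrayList<>(Arrays.asList("observer-1", "observer-2", "observer-3"));
    List<String> nonObserverMemberships =
        new ArrayList<>(Arrays.asList("standby-1", "active-1"));

    // Randomize observers so the same one is not always tried first.
    if (observerMemberships.size() > 1) {
      Collections.shuffle(observerMemberships);
    }
    // Keep a deterministic priority order for the non-observers.
    Collections.sort(nonObserverMemberships);

    System.out.println("try observers " + observerMemberships
        + ", then " + nonObserverMemberships);
  }
}
{code}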

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13522.001.patch, HDFS-13522_WIP.patch, RBF_ 
> Observer support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png
>
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15565:

Description: 
In the Balancer#doBalance() method, an invalid line of code is added, as 
follows:
 static private int doBalance(Collection<URI> namenodes,
 Collection<String> nsIds, final BalancerParameters p, Configuration conf)
 throws IOException, InterruptedException

{ ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left 
To Move Bytes Being Moved"); ... }

 

I think it was originally used for testing.

  was:
In the Balancer#doBalance() method, an invalid line of code is added, as 
follows:
static private int doBalance(Collection<URI> namenodes,
 Collection<String> nsIds, final BalancerParameters p, Configuration conf)
 throws IOException, InterruptedException {
...
System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
Move Bytes Being Moved");
...
}


> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
>  static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException
> { ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes 
> Left To Move Bytes Being Moved"); ... }
>  
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15565:

Attachment: HDFS-15565.001.patch
Status: Patch Available  (was: Open)

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
> static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException {
> ...
> System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
> Move Bytes Being Moved");
> ...
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15565:
--
Labels: pull-request-available  (was: )

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
> static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException {
> ...
> System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
> Move Bytes Being Moved");
> ...
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?focusedWorklogId=480802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480802
 ]

ASF GitHub Bot logged work on HDFS-15565:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 13:11
Start Date: 09/Sep/20 13:11
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480802)
Remaining Estimate: 0h
Time Spent: 10m

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
> static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException {
> ...
> System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
> Move Bytes Being Moved");
> ...
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread jianghua zhu (Jira)
jianghua zhu created HDFS-15565:
---

 Summary: Remove the invalid code in the Balancer#doBalance() 
method.
 Key: HDFS-15565
 URL: https://issues.apache.org/jira/browse/HDFS-15565
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: jianghua zhu
 Fix For: 3.3.0


In the Balancer#doBalance() method, an invalid line of code is added, as 
follows:
static private int doBalance(Collection<URI> namenodes,
 Collection<String> nsIds, final BalancerParameters p, Configuration conf)
 throws IOException, InterruptedException {
...
System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
Move Bytes Being Moved");
...
}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu reassigned HDFS-15565:
---

Assignee: jianghua zhu

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.3.0
>
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
> static private int doBalance(Collection<URI> namenodes,
>  Collection<String> nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException {
> ...
> System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes Left To 
> Move Bytes Being Moved");
> ...
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15564:
---
Status: Patch Available  (was: Open)

> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync
> 
>
> Key: HDFS-15564
> URL: https://issues.apache.org/jira/browse/HDFS-15564
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15564.001.patch
>
>
> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync,  otherwise 
> it’s dead code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15564:
---
Attachment: HDFS-15564.001.patch

> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync
> 
>
> Key: HDFS-15564
> URL: https://issues.apache.org/jira/browse/HDFS-15564
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15564.001.patch
>
>
> Add Test annotation for TestPersistBlocks#testRestartDfsWithSync,  otherwise 
> it’s dead code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15564) Add Test annotation for TestPersistBlocks#testRestartDfsWithSync

2020-09-09 Thread Fei Hui (Jira)
Fei Hui created HDFS-15564:
--

 Summary: Add Test annotation for 
TestPersistBlocks#testRestartDfsWithSync
 Key: HDFS-15564
 URL: https://issues.apache.org/jira/browse/HDFS-15564
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs
Affects Versions: 3.3.0
Reporter: Fei Hui
Assignee: Fei Hui


Add Test annotation for TestPersistBlocks#testRestartDfsWithSync,  otherwise 
it’s dead code
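
Concretely, under JUnit 4 a public test method is only picked up by the runner 
if it carries the annotation; a minimal illustration (method bodies elided):
{code:java}
import org.junit.Test;

public class TestPersistBlocksSketch {
  // Without @Test, JUnit 4 silently skips this method -- dead code.
  public void notRunByJUnit() { /* never executed by the runner */ }

  // With @Test, the runner executes it.
  @Test
  public void testRestartDfsWithSync() { /* executed */ }
}
{code}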




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15563) getTrashRoot() location can be wrong when the non-snapshottable directory contains the name of the snapshottable directory in its name

2020-09-09 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-15563:
--

Assignee: Siyao Meng

> getTrashRoot() location can be wrong when the non-snapshottable directory 
> contains the name of the snapshottable directory in its name 
> ---
>
> Key: HDFS-15563
> URL: https://issues.apache.org/jira/browse/HDFS-15563
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>
> {code:java}
> 1. snapshottable directories present in the cluster
> hdfs lsSnapshottableDir
> drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
> drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
> 2. Created a new directory outside the snapshottable directory
> hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
> 3. Tried to delete subdir2, it failed
> hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source 
> /user/hrt_4/newdir/subdir2 and dest 
> /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are 
> not under the same snapshot root.
> {code}
> For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out to be 
> "*/user/hrt_4/newdir/subdir/.Trash*",
> as is clear from the message here:
> {noformat}
> rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source 
> /user/hrt_4/newdir/subdir2 and dest 
> /user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are 
> not under the same snapshot root.{noformat}
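
The symptom is consistent with a bare string-prefix comparison when matching the 
path against the snapshottable root; a hedged illustration of that pitfall (not 
the actual HDFS code):
{code:java}
public class PrefixCheckSketch {
  public static void main(String[] args) {
    String snapshotRoot = "/user/hrt_4/newdir/subdir";
    String path = "/user/hrt_4/newdir/subdir2";

    // Buggy: "subdir2" starts with "subdir", so this wrongly matches.
    System.out.println(path.startsWith(snapshotRoot));   // true

    // Safer: require an exact match or a path-separator boundary.
    System.out.println(path.equals(snapshotRoot)
        || path.startsWith(snapshotRoot + "/"));         // false
  }
}
{code}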



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15563) getTrashRoot() location can be wrong when the non-snapshottable directory contains the name of the snapshottable directory in its name

2020-09-09 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDFS-15563:
--

 Summary: getTrashRoot() location can be wrong when the 
non-snapshottable directory contains the name of the snapshottable directory in 
its name 
 Key: HDFS-15563
 URL: https://issues.apache.org/jira/browse/HDFS-15563
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots
Affects Versions: 3.4.0
Reporter: Nilotpal Nandi
 Fix For: 3.4.0


{code:java}
1. snapshottable directories present in the cluster
hdfs lsSnapshottableDir
drwx-x-x 0 hrt_2 hrt_2 0 2020-09-08 07:42 0 65536 /user/hrt_2
drwxr-xr-x 0 hrt_4 hrt_4 0 2020-09-08 13:16 0 65536 /user/hrt_4/newdir/subdir
2. Created a new directory outside the snapshottable directory
hdfs dfs -mkdir /user/hrt_4/newdir/subdir2
3. Tried to delete subdir2, it failed
hdfs dfs -rm -r /user/hrt_4/newdir/subdir2
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source 
/user/hrt_4/newdir/subdir2 and dest 
/user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not 
under the same snapshot root.
{code}
For "*/user/hrt_4/newdir/subdir2*", the trash root location comes out to be 
"*/user/hrt_4/newdir/subdir/.Trash*",

as is clear from the message here:
{noformat}
rm: Failed to move to trash: hdfs://ns1/user/hrt_4/newdir/subdir2: Source 
/user/hrt_4/newdir/subdir2 and dest 
/user/hrt_4/newdir/subdir/.Trash/hdfs/Current/user/hrt_4/newdir/subdir2 are not 
under the same snapshot root.{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192749#comment-17192749
 ] 

Jinglun edited comment on HDFS-15516 at 9/9/20, 9:54 AM:
-

Hi [~jianghuazhu], would you mind fixing the whitespace issues? They are:

FSNamesystem.java#405

TestAuditLogger.java#236 and #251

 

And at HdfsAuditLogger.java#60, could you make the description more specific? 
For example, 'Set of create flags'.


was (Author: lijinglun):
Hi [~jianghuazhu], would you mind fixing the whitespace issues? They are:

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if file create happens with flags like overwrite , the audit logs 
> doesn't seem to contain the info regarding the flags in the audit logs. It 
> would be useful to add info regarding the create options in the audit logs 
> similar to Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192749#comment-17192749
 ] 

Jinglun commented on HDFS-15516:


Hi [~jianghuazhu], would you mind fixing the whitespace issues? They are:

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if file create happens with flags like overwrite , the audit logs 
> doesn't seem to contain the info regarding the flags in the audit logs. It 
> would be useful to add info regarding the create options in the audit logs 
> similar to Rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=480696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480696
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 09:14
Start Date: 09/Sep/20 09:14
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on a change in pull request 
#2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r485463441



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -34,28 +34,35 @@
 @InterfaceStability.Unstable
 public enum StorageType {
   // sorted by the speed of the storage types, from fast to slow
-  RAM_DISK(true),
-  SSD(false),
-  DISK(false),
-  ARCHIVE(false),
-  PROVIDED(false);
+  RAM_DISK(true, true),
+  NVDIMM(false, true),
+  SSD(false, false),
+  DISK(false, false),
+  ARCHIVE(false, false),
+  PROVIDED(false, false);
 
   private final boolean isTransient;
+  private final boolean isRAM;
 
   public static final StorageType DEFAULT = DISK;
 
   public static final StorageType[] EMPTY_ARRAY = {};
 
   private static final StorageType[] VALUES = values();
 
-  StorageType(boolean isTransient) {
+  StorageType(boolean isTransient, boolean isRAM) {
 this.isTransient = isTransient;
+this.isRAM = isRAM;
   }
 
   public boolean isTransient() {
 return isTransient;
   }
 
+  public boolean isRAM() {
+return isRAM;
+  }

Review comment:
   @brahmareddybattula  would you please have a look at this? Thanks.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480696)
Time Spent: 3h 10m  (was: 3h)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM 
> not only improves the response rate of HDFS but also ensures the reliability 
> of the data.
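
A small, self-contained illustration of the two flags from the diff discussed 
above, assuming the enum shape shown there (this mirrors the patch only in 
spirit; the real class is org.apache.hadoop.fs.StorageType):
{code:java}
public class StorageTypeFlagsSketch {
  // Constructor arguments follow the patch: (isTransient, isRAM).
  // NVDIMM is RAM-backed but persistent; RAM_DISK is RAM-backed and transient.
  enum StorageType {
    RAM_DISK(true, true), NVDIMM(false, true), SSD(false, false),
    DISK(false, false);

    final boolean isTransient;
    final boolean isRAM;

    StorageType(boolean isTransient, boolean isRAM) {
      this.isTransient = isTransient;
      this.isRAM = isRAM;
    }
  }

  public static void main(String[] args) {
    for (StorageType t : StorageType.values()) {
      System.out.println(t + ": transient=" + t.isTransient
          + ", ram=" + t.isRAM);
    }
  }
}
{code}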



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=480691&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480691
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 09:10
Start Date: 09/Sep/20 09:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-689435050


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 10s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 33s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |   1m 23s |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 602 
unchanged - 0 fixed = 604 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |   1m 12s |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new 
+ 586 unchanged - 0 fixed = 588 total (was 586)  |
   | -0 :warning: |  checkstyle  |   0m 56s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 7 new + 570 unchanged - 0 fixed = 577 total (was 570)  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m  4s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 100m  9s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 195m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
   |   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/1/artifact/out/Dockerfile
 |
   | GITHUB 

[jira] [Commented] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-09 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192724#comment-17192724
 ] 

jianghua zhu commented on HDFS-15548:
-

[~LeonG], I have read the code. Are you mainly addressing the relationship 
between DISK and ARCHIVE?
Does the relationship between SSD and ARCHIVE also need to be considered?

 

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.
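
For readers following along, dfs.datanode.data.dir already accepts a storage-type 
tag in front of each directory, so a same-mount DISK/ARCHIVE layout could be 
expressed as below. This is a hedged sketch with hypothetical paths; the 
per-directory capacity knob is what this issue would add.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class SameMountConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Both directories sit on the same device mount /mnt/disk1,
    // tagged with different storage types.
    conf.set("dfs.datanode.data.dir",
        "[DISK]/mnt/disk1/data,[ARCHIVE]/mnt/disk1/archive");
    System.out.println(conf.get("dfs.datanode.data.dir"));
  }
}
{code}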



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15562) StandbyCheckpointer will do checkpoint repeatedly while connecting observer/active namenode failed

2020-09-09 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192719#comment-17192719
 ] 

jianghua zhu commented on HDFS-15562:
-

[~aswqazxsd], can you tell us which version you are using?

 

 

> StandbyCheckpointer will do checkpoint repeatedly while connecting 
> observer/active namenode failed
> --
>
> Key: HDFS-15562
> URL: https://issues.apache.org/jira/browse/HDFS-15562
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: SunHao
>Priority: Major
>
> We find that the standby namenode will do checkpoints over and over when 
> connecting to the observer/active namenode fails.
> StandbyCheckpointer won't update “lastCheckpointTime” when uploading the new 
> fsimage to the other namenode fails, so the standby namenode will keep doing 
> checkpoints repeatedly.
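
A minimal sketch of the control flow being described; the method and field names 
follow the description above, not the actual StandbyCheckpointer source:
{code:java}
public class CheckpointLoopSketch {
  private long lastCheckpointTime;

  void doWork(long now, long checkpointPeriodMs) {
    if (now - lastCheckpointTime < checkpointPeriodMs) {
      return; // checkpoint not due yet
    }
    saveNewFsImage();
    boolean uploaded = uploadImageToOtherNameNode();
    // Bug shape: the timestamp is only advanced on a successful upload, so a
    // persistent upload failure re-triggers a costly checkpoint every pass.
    if (uploaded) {
      lastCheckpointTime = now;
    }
  }

  private void saveNewFsImage() { /* checkpoint the namespace locally */ }

  private boolean uploadImageToOtherNameNode() { return false; /* stub */ }
}
{code}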



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15554) RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15554?focusedWorklogId=480658&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480658
 ]

ASF GitHub Bot logged work on HDFS-15554:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 07:44
Start Date: 09/Sep/20 07:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#issuecomment-689374610


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 32s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 21s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32)  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 29s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new 
+ 30 unchanged - 2 fixed = 30 total (was 32)  |
   | -0 :warning: |  checkstyle  |   0m 17s |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 unchanged - 
0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 19s |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  93m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2266/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2266 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 6c1b6f812c1d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0d855159f09 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 

[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=480642=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480642
 ]

ASF GitHub Bot logged work on HDFS-15516:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 07:12
Start Date: 09/Sep/20 07:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#issuecomment-689356757


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 24s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 44s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 3 new + 184 unchanged - 1 fixed = 187 total (was 185)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 110m 16s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 208m 20s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 64bef8d94ebf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0d855159f09 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/4/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
 

[jira] [Work logged] (HDFS-15552) Let DeadNode Detector also work for EC cases

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15552?focusedWorklogId=480632=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480632
 ]

ASF GitHub Bot logged work on HDFS-15552:
-

Author: ASF GitHub Bot
Created on: 09/Sep/20 06:55
Start Date: 09/Sep/20 06:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2287:
URL: https://github.com/apache/hadoop/pull/2287#issuecomment-689348278


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   4m 18s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  4s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  8s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 116m 30s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 264m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2287/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2287 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5484194563c1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0d855159f09 |
   | Default Java | Private 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192679#comment-17192679
 ] 

Hadoop QA commented on HDFS-15516:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 184 unchanged - 1 fixed = 187 total (was 185) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  

[jira] [Commented] (HDFS-15413) DFSStripedInputStream throws exception when datanodes close idle connections

2020-09-09 Thread Solvannan R M (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192673#comment-17192673
 ] 

Solvannan R M commented on HDFS-15413:
--

We are also facing the same issue in our HBase clusters, where compaction fails 
with a connection-reset exception. The scenario: during compaction, an erasure 
coded file is opened and kept idle longer than the datanode timeout 
(dfs.datanode.socket.write.timeout), which leads to a "connection reset by 
peer" exception in the HBase regionservers. We are using Hadoop 3.1.3 with the 
RS-10-4-1024k EC policy enabled. For now, we have increased 
dfs.datanode.socket.write.timeout on all datanodes to mitigate the issue. Could 
anyone please look into this issue and assist us?
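
For reference, the mitigation above is a per-datanode setting in hdfs-site.xml, 
in milliseconds; the value below is only an example, not a recommendation:
{code:xml}
<!-- hdfs-site.xml on each datanode; example value of 10 minutes -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>600000</value>
</property>
{code}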

> DFSStripedInputStream throws exception when datanodes close idle connections
> 
>
> Key: HDFS-15413
> URL: https://issues.apache.org/jira/browse/HDFS-15413
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding, hdfs-client
>Affects Versions: 3.1.3
> Environment: - Hadoop 3.1.3
> - erasure coding with ISA-L and RS-3-2-1024k scheme
> - running in kubernetes
> - dfs.client.socket-timeout = 1
> - dfs.datanode.socket.write.timeout = 1
>Reporter: Andrey Elenskiy
>Priority: Critical
> Attachments: out.log
>
>
> We've run into an issue with compactions failing in HBase when erasure coding 
> is enabled on a table directory. After digging further I was able to narrow 
> it down to the seek + read logic and was able to reproduce the issue with the 
> hdfs client only:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.FSDataInputStream;
> public class ReaderRaw {
> public static void main(final String[] args) throws Exception {
> Path p = new Path(args[0]);
> int bufLen = Integer.parseInt(args[1]);
> int sleepDuration = Integer.parseInt(args[2]);
> int countBeforeSleep = Integer.parseInt(args[3]);
> int countAfterSleep = Integer.parseInt(args[4]);
> Configuration conf = new Configuration();
> FSDataInputStream istream = FileSystem.get(conf).open(p);
> byte[] buf = new byte[bufLen];
> int readTotal = 0;
> int count = 0;
> try {
>   while (true) {
> istream.seek(readTotal);
> int bytesRemaining = bufLen;
> int bufOffset = 0;
> while (bytesRemaining > 0) {
>   int nread = istream.read(buf, bufOffset, bytesRemaining);
>   if (nread < 0) {
>   throw new Exception("nread is less than zero");
>   }
>   readTotal += nread;
>   bufOffset += nread;
>   bytesRemaining -= nread;
> }
> count++;
> if (count == countBeforeSleep) {
> System.out.println("sleeping for " + sleepDuration + " 
> milliseconds");
> Thread.sleep(sleepDuration);
> System.out.println("resuming");
> }
> if (count == countBeforeSleep + countAfterSleep) {
> System.out.println("done");
> break;
> }
>   }
> } catch (Exception e) {
> System.out.println("exception on read " + count + " read total " 
> + readTotal);
> throw e;
> }
> }
> }
> {code}
> The issue appears to be due to the fact that datanodes close the connection 
> of the EC client if it doesn't fetch the next packet for longer than 
> dfs.client.socket-timeout. The EC client doesn't retry and instead assumes 
> that those datanodes went away, resulting in a "missing blocks" exception.
> I was able to consistently reproduce with the following arguments:
> {noformat}
> bufLen = 100 (just below 1MB which is the size of the stripe) 
> sleepDuration = (dfs.client.socket-timeout + 1) * 1000 (in our case 11000)
> countBeforeSleep = 1
> countAfterSleep = 7
> {noformat}
> I've attached the entire log output of running the snippet above against 
> erasure coded file with RS-3-2-1024k policy. And here are the logs from 
> datanodes of disconnecting the client:
> datanode 1:
> {noformat}
> 2020-06-15 19:06:20,697 INFO datanode.DataNode: Likely the client has stopped 
> reading, disconnecting it (datanode-v11-0-hadoop.hadoop:9866:DataXceiver 
> error processing READ_BLOCK operation  src: /10.128.23.40:53748 dst: 
> /10.128.14.46:9866); java.net.SocketTimeoutException: 1 millis timeout 
> while waiting for channel to be ready for write. ch : 
> java.nio.channels.SocketChannel[connected local=/10.128.14.46:9866 
> remote=/10.128.23.40:53748]
> {noformat}
> datanode 2:
> {noformat}
> 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-09 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192660#comment-17192660
 ] 

jianghua zhu commented on HDFS-15516:
-

[~elgoiri], thank you very much for your suggestions.
Here is a new patch file (HDFS-15516.003.patch); could you review it again?
Thank you very much.

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't contain any info regarding the flags. It would be useful to add info 
> regarding the create options to the audit logs, similar to the Rename ops.
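
One way to picture the proposal (this is a sketch of the idea, with assumed
helper names, not the attached patch): append the CreateFlag set to the command
recorded in the audit log, much as rename records its Options.Rename values.
{code:java}
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;

public class AuditCmdSketch {
  // Builds the audit-log "cmd" field, e.g. "create [CREATE, OVERWRITE]".
  static String createCmd(EnumSet<CreateFlag> flags) {
    return (flags == null || flags.isEmpty()) ? "create" : "create " + flags;
  }

  public static void main(String[] args) {
    System.out.println(createCmd(EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE)));
  }
}
{code}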



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception

2020-09-09 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192656#comment-17192656
 ] 

Xiaoqiao He commented on HDFS-14694:


+1, Thanks [~leosun08] and [~ayushtkn]. I tried running all the failed unit 
tests locally with [^HDFS-14694.014.patch]; all of them passed, so the failures 
seem unrelated to this change.
Will commit to trunk.

> Call recoverLease on DFSOutputStream close exception
> 
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chen Zhang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, 
> HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch, 
> HDFS-14694.006.patch, HDFS-14694.007.patch, HDFS-14694.008.patch, 
> HDFS-14694.009.patch, HDFS-14694.010.patch, HDFS-14694.011.patch, 
> HDFS-14694.012.patch, HDFS-14694.013.patch, HDFS-14694.014.patch
>
>
> HDFS uses file leases to manage opened files; when a file is not closed 
> normally, the NN will recover the lease automatically after the hard limit is 
> exceeded. But for a long-running service (e.g. HBase), the hdfs-client never 
> dies, so the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions itself to avoid this 
> condition (e.g. HBase automatically calls recoverLease for files that were 
> not closed normally), but in our experience most services (in our company) 
> don't handle this condition properly, which leaves lots of files in an 
> abnormal state or even causes data loss.
> This Jira proposes to add a feature that calls the recoverLease operation 
> automatically when a DFSOutputStream close encounters an exception. It should 
> be disabled by default, but when somebody builds a long-running service based 
> on HDFS, they can enable this option.
> We've had this feature in our internal Hadoop distribution for more than 3 
> years, and it's quite useful in our experience.
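
The proposed client-side behavior can be pictured roughly as below. This is a
sketch only; the config key name is an assumption for illustration, not
necessarily what the patch uses.
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SafeCloseSketch {
  static void closeOrRecover(DistributedFileSystem dfs, Path file,
      OutputStream out, Configuration conf) throws IOException {
    try {
      out.close();
    } catch (IOException e) {
      // Opt-in and disabled by default (hypothetical key name):
      if (conf.getBoolean("dfs.client.recover.lease.on.close.exception", false)) {
        dfs.recoverLease(file); // ask the NN to start lease recovery
      }
      throw e;
    }
  }
}
{code}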



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2020-09-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192646#comment-17192646
 ] 

Fei Hui commented on HDFS-13293:


[~aajisaka] [~elgoiri] Thanks for bringing this up again.
If we are agreed on this, I will rebase the patch.

> RBF: The RouterRPCServer should transfer CallerContext and client ip to 
> NamenodeRpcServer
> -
>
> Key: HDFS-13293
> URL: https://issues.apache.org/jira/browse/HDFS-13293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-13293.001.patch
>
>
> Otherwise, the namenode doesn't know the client's CallerContext.
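
The idea, roughly: before invoking the downstream NameNode, the Router folds
the original caller's context and client IP into the CallerContext it forwards.
A minimal sketch follows; the ",clientIp:" tag format is an assumption, not
whatever convention the final patch settles on.
{code:java}
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.ipc.Server;

public class RouterContextSketch {
  static void forwardCallerContext() {
    CallerContext orig = CallerContext.getCurrent();
    String base =
        (orig != null && orig.isContextValid()) ? orig.getContext() : "routerServer";
    String clientIp = Server.getRemoteAddress(); // real client's IP as seen by the Router
    CallerContext.setCurrent(
        new CallerContext.Builder(base + ",clientIp:" + clientIp).build());
    // The NN-side RPC issued on this thread now carries the combined context.
  }
}
{code}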



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15549) Improve DISK/ARCHIVE movement if they are on same filesystem

2020-09-09 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192644#comment-17192644
 ] 

Leon Gao commented on HDFS-15549:
-

[~jianghuazhu] I think this will depend on the implementation of HDFS-15548.

> Improve DISK/ARCHIVE movement if they are on same filesystem
> 
>
> Key: HDFS-15549
> URL: https://issues.apache.org/jira/browse/HDFS-15549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>
> When moving blocks between DISK/ARCHIVE, we should prefer the volume on the 
> same underlying filesystem and use "rename" instead of "copy" to save IO.
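
A quick illustration of the decision described above, using plain java.nio
rather than datanode volume internals; a sketch under those assumptions, not
the eventual implementation:
{code:java}
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BlockMoveSketch {
  static void moveBlockFile(Path src, Path dst) throws IOException {
    try {
      // Same underlying filesystem: a rename is metadata-only, no data copied.
      Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      // Different filesystems (e.g. separate DISK and ARCHIVE mounts):
      // fall back to a full copy followed by a delete.
      Files.copy(src, dst);
      Files.delete(src);
    }
  }
}
{code}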



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org