[jira] [Commented] (HDFS-15290) NPE in HttpServer during NameNode startup

2020-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186873#comment-17186873
 ] 

Hadoop QA commented on HDFS-15290:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
16s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} branch-2.10 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[jira] [Commented] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-08-28 Thread Abhishek Das (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186871#comment-17186871
 ] 

Abhishek Das commented on HDFS-15329:
-

I think that because this is a sub-task, I cannot assign it to myself.

> Provide FileContext based ViewFSOverloadScheme implementation
> -
>
> Key: HDFS-15329
> URL: https://issues.apache.org/jira/browse/HDFS-15329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hdfs, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> This Jira tracks the FileContext-based ViewFSOverloadScheme implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15290) NPE in HttpServer during NameNode startup

2020-08-28 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-15290:
---
Attachment: HDFS-15290-branch-2.10.003.patch

> NPE in HttpServer during NameNode startup
> -
>
> Key: HDFS-15290
> URL: https://issues.apache.org/jira/browse/HDFS-15290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0, 2.7.8, 3.3.0
>Reporter: Konstantin Shvachko
>Assignee: Simbarashe Dzinamarira
>Priority: Major
> Attachments: HDFS-15290-branch-2.10.003.patch, HDFS-15290.001.patch, 
> HDFS-15290.002.patch, HDFS-15290.003.patch
>
>
> When the NameNode starts, it first starts the HttpServer and then loads the 
> fsImage and edits. While loading, the namesystem field in NameNode is null. I 
> saw a StandbyNode send a checkpoint request, which failed with an NPE because 
> NNStorage was not instantiated yet.
> We should check the NameNode startup status before accepting checkpoint 
> requests.
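
A minimal sketch of the guard described above (class and method names are
assumptions drawn from the NameNode HTTP code paths, not necessarily what the
committed patch does):

    // In the servlet that accepts checkpoint uploads from the StandbyNode:
    NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
    if (nn.getNamesystem() == null) {
      // Still loading fsimage/edits; tell the StandbyNode to retry later
      // instead of letting the request die with an NPE.
      response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
          "NameNode is starting up; cannot accept checkpoints yet");
      return;
    }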



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15549) Improve DISK/ARCHIVE movement if they are on same filesystem

2020-08-28 Thread Leon Gao (Jira)
Leon Gao created HDFS-15549:
---

 Summary: Improve DISK/ARCHIVE movement if they are on same 
filesystem
 Key: HDFS-15549
 URL: https://issues.apache.org/jira/browse/HDFS-15549
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Leon Gao
Assignee: Leon Gao


When moving blocks between DISK/ARCHIVE, we should prefer the volume on the 
same underlying filesystem and use "rename" instead of "copy" to save IO.
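
A self-contained java.nio sketch of the rename-over-copy idea (illustrative
only, not the DataNode's actual mover code):

    import java.io.IOException;
    import java.nio.file.AtomicMoveNotSupportedException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public final class BlockMoveSketch {
      /** Moves a block file from a DISK dir to an ARCHIVE dir, preferring a
       *  metadata-only rename when both dirs are on the same filesystem. */
      static void moveBlock(Path src, Path dst) throws IOException {
        Files.createDirectories(dst.getParent());
        try {
          // Same filesystem: rename costs no data IO.
          Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
          // Crossing filesystems: fall back to a full copy plus delete.
          Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
          Files.delete(src);
        }
      }
    }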



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-08-28 Thread Leon Gao (Jira)
Leon Gao created HDFS-15548:
---

 Summary: Allow configuring DISK/ARCHIVE storage types on same 
device mount
 Key: HDFS-15548
 URL: https://issues.apache.org/jira/browse/HDFS-15548
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Leon Gao
Assignee: Leon Gao


We can allow configuring DISK/ARCHIVE storage types on the same device mount, 
using two separate directories.

Users should be able to configure the capacity for each. Also, the datanode 
usage report should report stats correctly.
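
For context, dfs.datanode.data.dir already accepts a storage-type prefix per
directory, so the layout proposed here would look roughly like the
hdfs-site.xml sketch below (paths are illustrative; the per-type capacity
settings this JIRA proposes are not shown, since they do not exist yet):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- Two directories on the same /mnt/disk1 mount, one DISK, one ARCHIVE -->
      <value>[DISK]/mnt/disk1/dfs/data,[ARCHIVE]/mnt/disk1/dfs/archive</value>
    </property>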



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15547) Dynamic disk-level tiering

2020-08-28 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186860#comment-17186860
 ] 

Leon Gao commented on HDFS-15547:
-

[~sureshms] [~weichiu] [~sunchao] This is the new use case of archival storage 
we discussed. I will upload some initial changes soon.

[~jeffreyz] [~ekanth] FYI

> Dynamic disk-level tiering
> --
>
> Key: HDFS-15547
> URL: https://issues.apache.org/jira/browse/HDFS-15547
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
> Attachments: Proposal - Dynamic disk-level tiering.pdf
>
>
> This is a proposal for a new use case based on archival storage: allow 
> configuring DISK and ARCHIVE storage types on the same device (filesystem) to 
> balance disk IO across disks with different densities.
> The proposal mainly solves two problems:
> 1) The disk IO of ARCHIVE disks is underutilized. This is normal in many use 
> cases where the data hotness is highly skewed.
> 2) Over the years, as better/cheaper hard drives have appeared on the market, 
> a large production environment can end up with mixed disk densities. For 
> example, in our prod environment we have 2TB, 4TB, 8TB, and 16TB disks. When 
> putting all these different HDDs into the cluster, we should be able to 
> utilize disk capacity and disk IO efficiently for all of them.
> When moving blocks from DISK to ARCHIVE, we can prefer the same disk and 
> simply rename the files instead of copying them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15547) Dynamic disk-level tiering

2020-08-28 Thread Leon Gao (Jira)
Leon Gao created HDFS-15547:
---

 Summary: Dynamic disk-level tiering
 Key: HDFS-15547
 URL: https://issues.apache.org/jira/browse/HDFS-15547
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode
Reporter: Leon Gao
Assignee: Leon Gao
 Attachments: Proposal - Dynamic disk-level tiering.pdf

This is a proposal for a new use case based on archival storage: allow 
configuring DISK and ARCHIVE storage types on the same device (filesystem) to 
balance disk IO across disks with different densities.

The proposal mainly solves two problems:

1) The disk IO of ARCHIVE disks is underutilized. This is normal in many use 
cases where the data hotness is highly skewed.

2) Over the years, as better/cheaper hard drives have appeared on the market, a 
large production environment can end up with mixed disk densities. For example, 
in our prod environment we have 2TB, 4TB, 8TB, and 16TB disks. When putting all 
these different HDDs into the cluster, we should be able to utilize disk 
capacity and disk IO efficiently for all of them.

When moving blocks from DISK to ARCHIVE, we can prefer the same disk and simply 
rename the files instead of copying them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15547) Dynamic disk-level tiering

2020-08-28 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon Gao updated HDFS-15547:

Attachment: Proposal - Dynamic disk-level tiering.pdf

> Dynamic disk-level tiering
> --
>
> Key: HDFS-15547
> URL: https://issues.apache.org/jira/browse/HDFS-15547
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
> Attachments: Proposal - Dynamic disk-level tiering.pdf
>
>
> This is a proposal for a new use case based on archival storage: allow 
> configuring DISK and ARCHIVE storage types on the same device (filesystem) to 
> balance disk IO across disks with different densities.
> The proposal mainly solves two problems:
> 1) The disk IO of ARCHIVE disks is underutilized. This is normal in many use 
> cases where the data hotness is highly skewed.
> 2) Over the years, as better/cheaper hard drives have appeared on the market, 
> a large production environment can end up with mixed disk densities. For 
> example, in our prod environment we have 2TB, 4TB, 8TB, and 16TB disks. When 
> putting all these different HDDs into the cluster, we should be able to 
> utilize disk capacity and disk IO efficiently for all of them.
> When moving blocks from DISK to ARCHIVE, we can prefer the same disk and 
> simply rename the files instead of copying them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186844#comment-17186844
 ] 

Hadoop QA commented on HDFS-15545:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch 

[jira] [Work logged] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?focusedWorklogId=476009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-476009
 ]

ASF GitHub Bot logged work on HDFS-15545:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 22:15
Start Date: 28/Aug/20 22:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2255:
URL: https://github.com/apache/hadoop/pull/2255#issuecomment-683169468


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 18s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 45s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 44s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 16s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 53s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 53s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in 
whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 23s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 57s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 100m 19s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 40s |  The patch generated 3 ASF License 
warnings.  |
   |  |   | 213m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2255/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d2fc9072467c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 44542863f41 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Comment Edited] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186782#comment-17186782
 ] 

Mingliang Liu edited comment on HDFS-15546 at 8/28/20, 8:04 PM:


Thanks for the good discussion, [~jianghuazhu] and [~chaosun].

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". "Resolved" with fix versions only 
applies to JIRAs that have code changes.


was (Author: liuml07):
Thanks for good discussion [~jianghuazhu] and [~chaosun]

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". A Jira which is "Resolved" with fix 
versions only apply to code change.

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same, so the first 
> call can be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15546.


> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same, so the first 
> call can be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-15546.
--
Resolution: Not A Problem

Thanks for the good discussion, [~jianghuazhu] and [~chaosun].

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". Resolving a Jira with fix versions 
only applies when there is a code change.

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same, so the first 
> call can be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reopened HDFS-15546:
--

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same, so the first 
> call can be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15546:
-
Fix Version/s: (was: 3.3.0)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same, so the first 
> call can be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread Issac Buenrostro (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Issac Buenrostro updated HDFS-15545:

Status: Patch Available  (was: Open)

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Assignee: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15545.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.
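
A rough sketch of that re-search logic (helper names are hypothetical; the
actual patch on the PR may differ):

    // On an auth failure, re-scan the UGI's credentials before asking the
    // NameNode for a brand-new token (names illustrative, not the real patch):
    Token<?> fromUgi = selectDelegationTokenFromUgi(ugi);
    if (fromUgi != null && !fromUgi.equals(cachedToken)) {
      // External logic has added a fresh token; switch to it.
      cachedToken = fromUgi;
    } else {
      // Last resort: fetch a new delegation token from the NameNode.
      cachedToken = getDelegationToken(null);
    }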



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread Issac Buenrostro (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Issac Buenrostro updated HDFS-15545:

Attachment: HDFS-15545.001.patch

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Assignee: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15545.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?focusedWorklogId=475957&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475957
 ]

ASF GitHub Bot logged work on HDFS-15539:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 19:42
Start Date: 28/Aug/20 19:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2258:
URL: https://github.com/apache/hadoop/pull/2258#issuecomment-683112835


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 34s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 50s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 57s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 110m 48s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 36s |  The patch generated 3 ASF License 
warnings.  |
   |  |   | 224m 24s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2258/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2258 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 656e503fc62c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 44542863f41 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 

[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2020-08-28 Thread Andras Bokor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186763#comment-17186763
 ] 

Andras Bokor commented on HDFS-14353:
-

For git greppers: the commit message is missing the JIRA id, so you can find 
the commit by grepping for the title of this JIRA: {{Erasure Coding: metrics 
xmitsInProgress become to negative.}}
Or find it by commit hash: d6fc482a541310d83d9cf1393e8f6ed220ef4c1e
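
Concretely, from a trunk checkout either of these locates it:

    git log --oneline --grep='Erasure Coding: metrics xmitsInProgress become to negative'
    git show d6fc482a541310d83d9cf1393e8f6ed220ef4c1e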

> Erasure Coding: metrics xmitsInProgress become to negative.
> ---
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.4.0, 3.1.5
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, 
> HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, 
> HDFS-14353.006.patch, HDFS-14353.007.patch, HDFS-14353.008.patch, 
> HDFS-14353.009.patch, HDFS-14353.010.patch, screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reassigned HDFS-15545:
--

Assignee: Issac Buenrostro

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Assignee: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?focusedWorklogId=475943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475943
 ]

ASF GitHub Bot logged work on HDFS-15545:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 18:38
Start Date: 28/Aug/20 18:38
Worklog Time Spent: 10m 
  Work Description: ibuenros commented on pull request #2255:
URL: https://github.com/apache/hadoop/pull/2255#issuecomment-683062619


   The test failures all seem to be on the server side of HDFS. No changes in 
this PR touch the server side, so they should be unrelated.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475943)
Time Spent: 40m  (was: 0.5h)

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15545:
--
Labels: pull-request-available  (was: )

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?focusedWorklogId=475939&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475939
 ]

ASF GitHub Bot logged work on HDFS-15545:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 18:28
Start Date: 28/Aug/20 18:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2255:
URL: https://github.com/apache/hadoop/pull/2255#issuecomment-683047915


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 38s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 12s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 28s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 31s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 44s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 44s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 53s |  hadoop-hdfs-project: The patch 
generated 2 new + 69 unchanged - 0 fixed = 71 total (was 69)  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  90m 31s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 200m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2255/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8cdbc10bd954 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c392d9022a3 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 

[jira] [Commented] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread Issac Buenrostro (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186745#comment-17186745
 ] 

Issac Buenrostro commented on HDFS-15545:
-

GitHub PR: [https://github.com/apache/hadoop/pull/2255]

> (S)Webhdfs will not use updated delegation tokens available in the ugi after 
> the old ones expire
> 
>
> Key: HDFS-15545
> URL: https://issues.apache.org/jira/browse/HDFS-15545
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Issac Buenrostro
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> WebHdfsFileSystem can select a delegation token to use from the current user 
> UGI. The token selection is sticky, and WebHdfsFileSystem will re-use it 
> every time without searching the UGI again.
> If the previous token expires, WebHdfsFileSystem will catch the exception and 
> attempt to get a new token. However, the mechanism to get a new token 
> bypasses searching for one on the UGI, so even if there is external logic 
> that has retrieved a new token, it is not possible to make the FileSystem use 
> the new, valid token, rendering the FileSystem object unusable.
> A simple fix would allow WebHdfsFileSystem to re-search the UGI, and if it 
> finds a different token than the cached one try to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?focusedWorklogId=475936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475936
 ]

ASF GitHub Bot logged work on HDFS-15539:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 18:05
Start Date: 28/Aug/20 18:05
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2258:
URL: https://github.com/apache/hadoop/pull/2258#discussion_r479458772



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
##
@@ -2442,4 +2442,38 @@ public void testGetTrashRootOnEZInSnapshottableDir()
       }
     }
   }
+
+  @Test
+  public void testDisallowSnapshotShouldThrowWhenTrashRootExists()
+      throws IOException {
+    Configuration conf = getTestConfiguration();
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+    try {
+      DistributedFileSystem dfs = cluster.getFileSystem();
+      Path testDir = new Path("/disallowss/test1/");
+      Path file0path = new Path(testDir, "file-0");
+      dfs.create(file0path);
+      dfs.allowSnapshot(testDir);
+      // Create trash root manually
+      Path testDirTrashRoot = new Path(testDir, FileSystem.TRASH_PREFIX);
+      dfs.mkdirs(testDirTrashRoot);
+      // Try disallowing snapshot, should throw
+      try {
+        dfs.disallowSnapshot(testDir);
+        fail("Should have thrown IOException when trash root exists inside "

Review comment:
   I don't think it's used much in the HDFS code base, but we can use 
LambdaTestUtils, as in 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockOutputStream.java#L81
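
For reference, the suggested LambdaTestUtils form collapses the inner
try/fail/catch into a single call (sketch; the expected-message substring is
an assumption):

    // org.apache.hadoop.test.LambdaTestUtils
    LambdaTestUtils.intercept(IOException.class,
        "trash root",  // substring the message is expected to contain (assumed)
        () -> dfs.disallowSnapshot(testDir));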





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475936)
Time Spent: 20m  (was: 10m)

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When snapshot is disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root in that dir (if any) anymore. The risk is that the trash root will 
> be left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15492) Make trash root inside each snapshottable directory

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15492?focusedWorklogId=475926&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475926
 ]

ASF GitHub Bot logged work on HDFS-15492:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 17:50
Start Date: 28/Aug/20 17:50
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#issuecomment-682989066


   I'm late to this, but we should verify that httpfs also behaves the same; 
otherwise it'll break Hue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475926)
Remaining Estimate: 0h
Time Spent: 10m

> Make trash root inside each snapshottable directory
> ---
>
> Key: HDFS-15492
> URL: https://issues.apache.org/jira/browse/HDFS-15492
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Affects Versions: 3.2.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have seen FSImage corruption cases (e.g. HDFS-13101) where files inside 
> a snapshottable directory are moved outside of it. The most common case 
> of this is when trash is enabled and a user deletes a file via the command 
> line without skipTrash.
> This jira aims to make a trash root for each snapshottable directory, the 
> same way encryption zones behave at the moment.
> This will make trash cleanup a little bit more expensive on the NameNode, as 
> it will have to iterate all trash roots. But it should be fine as long as 
> there aren't many snapshottable directories.
> I could make this improvement optional and disable it by default if needed, 
> via a config key such as {{dfs.namenode.snapshot.trashroot.enabled}}
> One small caveat, though, when disabling (disallowing) snapshots on a 
> snapshottable directory while this improvement is in place: the client should 
> merge the snapshottable directory's trash with that user's trash to ensure 
> proper trash cleanup.
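> As a rough illustration of the intended behaviour (the paths and user are 
> made up, and this assumes the feature mirrors encryption-zone trash 
> resolution):
> {code:java}
> Path file = new Path("/snapdir/data/file1");  // /snapdir is snapshottable
> Path trashRoot = dfs.getTrashRoot(file);
> // With the feature enabled: trashRoot == /snapdir/.Trash/<user>
> // Without it (today):       trashRoot == /user/<user>/.Trash
> {code}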



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15492) Make trash root inside each snapshottable directory

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15492:
--
Labels: pull-request-available  (was: )

> Make trash root inside each snapshottable directory
> ---
>
> Key: HDFS-15492
> URL: https://issues.apache.org/jira/browse/HDFS-15492
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Affects Versions: 3.2.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have seen FSImage corruption cases (e.g. HDFS-13101) where files inside 
> one snapshottable directories are moved outside of it. The most common case 
> of this is when trash is enabled and user deletes some file via the command 
> line without skipTrash.
> This jira aims to make a trash root for each snapshottable directory, same as 
> how encryption zone behaves at the moment.
> This will make trash cleanup a little bit more expensive on the NameNode as 
> it will be to iterate all trash roots. But should be fine as long as there 
> aren't many snapshottable directories.
> I could make this improvement as an option and disable it by default if 
> needed, such as {{dfs.namenode.snapshot.trashroot.enabled}}
> One small caveat though, when disabling (disallowing) snapshot on the 
> snapshottable directory when this improvement is in place. The client should 
> merge the snapshottable directory's trash with that user's trash to ensure 
> proper trash cleanup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475917
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 17:31
Start Date: 28/Aug/20 17:31
Worklog Time Spent: 10m 
  Work Description: jianghuazhu closed pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475917)
Time Spent: 1.5h  (was: 1h 20m)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15546:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186723#comment-17186723
 ] 

jianghua zhu commented on HDFS-15546:
-

I double-checked; this is not a fatal problem.
I will withdraw it.

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475910&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475910
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 17:20
Start Date: 28/Aug/20 17:20
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479437316



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
##
@@ -158,7 +158,7 @@ public static DFSZKFailoverController create(Configuration conf) {
           "You may run zkfc on the node other than namenode.";
       throw new HadoopIllegalArgumentException(msg);
     }
-    NameNode.initializeGenericKeys(localNNConf, nsId, nnId);

Review comment:
   I double-checked, and I agree with your suggestion.
   Thank you very much.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475910)
Time Spent: 1h 20m  (was: 1h 10m)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475908&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475908
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 17:07
Start Date: 28/Aug/20 17:07
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479430862



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
##
@@ -226,9 +228,11 @@ public void testWithBindAddressSet() throws Exception {
     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
         conf);
     String addr = zkfc.getRpcAddressToBindTo().getHostString();
+    String nameserviceId = conf.get(DFS_NAMESERVICE_ID);
 
     assertEquals("Bind address " + addr + " is not wildcard.",
         addr, WILDCARD_ADDRESS);
+    assertEquals(nameserviceId, NAMESERVICE_ID);

Review comment:
   Yes: `assertEquals(NAMESERVICE_ID, nameserviceId)`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475908)
Time Spent: 1h 10m  (was: 1h)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475904&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475904
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 16:57
Start Date: 28/Aug/20 16:57
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479426043



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
##
@@ -158,7 +158,7 @@ public static DFSZKFailoverController create(Configuration conf) {
           "You may run zkfc on the node other than namenode.";
       throw new HadoopIllegalArgumentException(msg);
     }
-    NameNode.initializeGenericKeys(localNNConf, nsId, nnId);

Review comment:
   Yes but as I said, the call in `NNHAServiceTarget` is on a copy:
   ```java
   // Make a copy of the conf, and override configs based on the
   // target node -- not the node we happen to be running on.
   HdfsConfiguration targetConf = new HdfsConfiguration(conf);
   NameNode.initializeGenericKeys(targetConf, nsId, nnId);
   ```
   so the original `localNNConf` won't have those keys set. I'm not sure 
whether removing this call will have any side effects.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475904)
Time Spent: 1h  (was: 50m)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475903&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475903
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 16:55
Start Date: 28/Aug/20 16:55
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479424939



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
##
@@ -226,9 +228,11 @@ public void testWithBindAddressSet() throws Exception {
     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
         conf);
     String addr = zkfc.getRpcAddressToBindTo().getHostString();
+    String nameserviceId = conf.get(DFS_NAMESERVICE_ID);
 
     assertEquals("Bind address " + addr + " is not wildcard.",
         addr, WILDCARD_ADDRESS);
+    assertEquals(nameserviceId, NAMESERVICE_ID);

Review comment:
   Is this usage wrong?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475903)
Time Spent: 50m  (was: 40m)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475902&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475902
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 16:54
Start Date: 28/Aug/20 16:54
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479424352



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
##
@@ -158,7 +158,7 @@ public static DFSZKFailoverController create(Configuration conf) {
           "You may run zkfc on the node other than namenode.";
       throw new HadoopIllegalArgumentException(msg);
     }
-    NameNode.initializeGenericKeys(localNNConf, nsId, nnId);

Review comment:
   NameNode.initializeGenericKeys() ends up being called twice: besides this 
call, it is invoked once more when NNHAServiceTarget() is constructed. 
Therefore, the call here can be deleted.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475902)
Time Spent: 40m  (was: 0.5h)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15539:
--
Status: Patch Available  (was: Open)

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When snapshots are disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root inside that dir anymore (if any). The risk is that the trash root 
> will be left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15539:
--
Labels: pull-request-available  (was: )

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When snapshots are disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root inside that dir anymore (if any). The risk is that the trash root 
> will be left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?focusedWorklogId=475877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475877
 ]

ASF GitHub Bot logged work on HDFS-15539:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 15:56
Start Date: 28/Aug/20 15:56
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #2258:
URL: https://github.com/apache/hadoop/pull/2258


   https://issues.apache.org/jira/browse/HDFS-15539
   
   I initially intended to put the logic in 
`SnapshotManager#resetSnapshottable`.
   But later I figured it makes more sense to put the check on the client side 
instead:
   1. Trash is more of a client-side concept.
   2. As a result of (1), it is much cleaner to add the check on the client 
than on the server side.
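   
   A minimal sketch of what such a client-side check could look like 
   (illustrative only -- the exception message and the exact placement inside 
   the client's disallowSnapshot path are assumptions, not the final patch):
   ```java
   // Before issuing the disallowSnapshot RPC, refuse if the trash root
   // under the snapshottable directory still has entries.
   Path trashRoot = new Path(dir, FileSystem.TRASH_PREFIX);
   if (dfs.exists(trashRoot) && dfs.listStatus(trashRoot).length > 0) {
     throw new IOException("Trash root " + trashRoot + " is not empty. "
         + "Clean it up or rename it before disallowing snapshots.");
   }
   ```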



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475877)
Remaining Estimate: 0h
Time Spent: 10m

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When snapshots are disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root inside that dir anymore (if any). The risk is that the trash root 
> will be left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475861&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475861
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 15:32
Start Date: 28/Aug/20 15:32
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#discussion_r479379964



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
##
@@ -158,7 +158,7 @@ public static DFSZKFailoverController create(Configuration conf) {
           "You may run zkfc on the node other than namenode.";
       throw new HadoopIllegalArgumentException(msg);
     }
-    NameNode.initializeGenericKeys(localNNConf, nsId, nnId);

Review comment:
   In `NNHAServiceTarget`, though, the call is made on a copy of `localNNConf`, 
so the original conf will not have those keys set.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
##
@@ -226,9 +228,11 @@ public void testWithBindAddressSet() throws Exception {
     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
         conf);
     String addr = zkfc.getRpcAddressToBindTo().getHostString();
+    String nameserviceId = conf.get(DFS_NAMESERVICE_ID);
 
     assertEquals("Bind address " + addr + " is not wildcard.",
         addr, WILDCARD_ADDRESS);
+    assertEquals(nameserviceId, NAMESERVICE_ID);

Review comment:
   nit: `assertEquals(expected, actual)`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475861)
Time Spent: 0.5h  (was: 20m)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15542) Add identified snapshot corruption tests for ordered snapshot deletion

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15542?focusedWorklogId=475855&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475855
 ]

ASF GitHub Bot logged work on HDFS-15542:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 15:09
Start Date: 28/Aug/20 15:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2251:
URL: https://github.com/apache/hadoop/pull/2251#issuecomment-682683028


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 40s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 101m 21s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 184m  2s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | 
hadoop.hdfs.server.blockmanagement.TestAvailableSpaceRackFaultTolerantBPP |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2251/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2251 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b70fa96d748d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 30b1b1cc047 |
   | Default Java | Private 

[jira] [Updated] (HDFS-15542) Add identified snapshot corruption tests for ordered snapshot deletion

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15542:
--
Labels: pull-request-available  (was: )

> Add identified snapshot corruption tests for ordered snapshot deletion
> --
>
> Key: HDFS-15542
> URL: https://issues.apache.org/jira/browse/HDFS-15542
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDFS-13101, HDFS-15012 and HDFS-15313, along with HDFS-15470, have fsimage 
> corruption sequences with snapshots. The idea here is to aggregate these 
> unit tests and enable them for the ordered snapshot deletion feature.
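> As a sketch, each aggregated test would replay its corruption sequence with 
> the feature switched on (the config key below is assumed from the feature's 
> name and may differ from the final constant):
> {code:java}
> Configuration conf = new Configuration();
> conf.setBoolean("dfs.namenode.snapshot.deletion.ordered", true);
> MiniDFSCluster cluster =
>     new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
> // ... replay the snapshot create/delete steps from the original jira ...
> {code}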



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15471) TestHDFSContractMultipartUploader fails on trunk

2020-08-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15471.
---
Fix Version/s: 3.3.1
   Resolution: Fixed

> TestHDFSContractMultipartUploader fails on trunk
> 
>
> Key: HDFS-15471
> URL: https://issues.apache.org/jira/browse/HDFS-15471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available, test
> Fix For: 3.3.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestHDFSContractMultipartUploader}} fails on trunk with 
> {{IllegalArgumentException}}
> {code:bash}
> [ERROR] 
> testConcurrentUploads(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader)
>   Time elapsed: 0.127 s  <<< ERROR!
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.<init>(LambdaTestUtils.java:907)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15471) TestHDFSContractMultipartUploader fails on trunk

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15471?focusedWorklogId=475842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475842
 ]

ASF GitHub Bot logged work on HDFS-15471:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 14:39
Start Date: 28/Aug/20 14:39
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2252:
URL: https://github.com/apache/hadoop/pull/2252


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475842)
Time Spent: 50m  (was: 40m)

> TestHDFSContractMultipartUploader fails on trunk
> 
>
> Key: HDFS-15471
> URL: https://issues.apache.org/jira/browse/HDFS-15471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Steve Loughran
>Priority: Major
>  Labels: test
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestHDFSContractMultipartUploader}} fails on trunk with 
> {{IllegalArgumentException}}
> {code:bash}
> [ERROR] 
> testConcurrentUploads(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader)
>   Time elapsed: 0.127 s  <<< ERROR!
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.<init>(LambdaTestUtils.java:907)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15471) TestHDFSContractMultipartUploader fails on trunk

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15471:
--
Labels: pull-request-available test  (was: test)

> TestHDFSContractMultipartUploader fails on trunk
> 
>
> Key: HDFS-15471
> URL: https://issues.apache.org/jira/browse/HDFS-15471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestHDFSContractMultipartUploader}} fails on trunk with 
> {{IllegalArgumentException}}
> {code:bash}
> [ERROR] 
> testConcurrentUploads(org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader)
>   Time elapsed: 0.127 s  <<< ERROR!
> java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.<init>(LambdaTestUtils.java:907)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186577#comment-17186577
 ] 

Hadoop QA commented on HDFS-15546:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
20s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}140m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15546:
--
Labels: pull-request-available  (was: )

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> The parameters passed in the two calls are the same, so the first call can 
> be removed without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475830
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 13:58
Start Date: 28/Aug/20 13:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256#issuecomment-682590378


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 27s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   4m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 13s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 51s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 111m 42s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 213m  9s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.security.TestDelegationToken |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2256/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2256 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1c3e217fb50f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 06793da1001 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |

[jira] [Comment Edited] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186461#comment-17186461
 ] 

jianghua zhu edited comment on HDFS-15546 at 8/28/20, 10:39 AM:


[~elgoiri], [~hexiaoqiao], [~hemanthboyina], here is a patch file 
(HDFS-15546.001.patch); could you help review the code?

 


was (Author: jianghuazhu):
[~elgoiri], [~hexiaoqiao], here is a patch file (HDFS-15546.001.patch); could 
you help review the code?

 

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.
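
For reference, a minimal Java sketch of the redundancy described above, assuming hadoop-hdfs is on the classpath; this is illustrative only (the helper class and method names are hypothetical, not the verbatim Hadoop source):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;

// Sketch: shows why the first initializeGenericKeys() call is redundant.
class DuplicateInitSketch {
  static NNHAServiceTarget buildTarget(Configuration localNNConf,
                                       String nsId, String nnId) {
    // Proposed for removal -- the redundant first call:
    // NameNode.initializeGenericKeys(localNNConf, nsId, nnId);

    // The NNHAServiceTarget constructor performs the same initialization
    // with the same (conf, nsId, nnId) arguments, so dropping the call
    // above does not change behavior.
    return new NNHAServiceTarget(localNNConf, nsId, nnId);
  }
}
```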



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186461#comment-17186461
 ] 

jianghua zhu commented on HDFS-15546:
-

[~elgoiri], [~hexiaoqiao], here is a patch file (HDFS-15546.001.patch); could 
you help review the code?

 

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15546:

Fix Version/s: 3.3.0

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15546:

Attachment: HDFS-15546.001.patch
Status: Patch Available  (was: Open)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15546:

Attachment: (was: HDFS-15546.001.patch)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15546:

Attachment: HDFS-15546.001.patch

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?focusedWorklogId=475791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475791
 ]

ASF GitHub Bot logged work on HDFS-15546:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 10:24
Start Date: 28/Aug/20 10:24
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #2256:
URL: https://github.com/apache/hadoop/pull/2256


   …ng DFSZKFailoverController
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and set the pull request title so that it starts with
   the corresponding JIRA issue number (e.g. HADOOP-X. Fix a typo in YYY.).
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 475791)
Remaining Estimate: 0h
Time Spent: 10m

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu reassigned HDFS-15546:
---

Assignee: jianghua zhu

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>
> When DFSZKFailoverController#create() is called, 
> NameNode#initializeGenericKeys() is invoked twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> Second call, in the NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> Both calls pass the same parameters, so the first call can be removed 
> without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread jianghua zhu (Jira)
jianghua zhu created HDFS-15546:
---

 Summary: Remove duplicate initializeGenericKeys method when 
creating DFSZKFailoverController
 Key: HDFS-15546
 URL: https://issues.apache.org/jira/browse/HDFS-15546
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: jianghua zhu


When DFSZKFailoverController#create() is called, 
NameNode#initializeGenericKeys() is invoked twice.
First call:
DFSZKFailoverController#create() {
...
NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
...
}


Second call, in the NNHAServiceTarget constructor:
NNHAServiceTarget() {
...
NameNode.initializeGenericKeys(targetConf, nsId, nnId);
...
}
Both calls pass the same parameters, so the first call can be removed 
without affecting program behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15098) Add SM4 encryption method for HDFS

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15098?focusedWorklogId=475757&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475757
 ]

ASF GitHub Bot logged work on HDFS-15098:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 08:50
Start Date: 28/Aug/20 08:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2211:
URL: https://github.com/apache/hadoop/pull/2211#issuecomment-682408783


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  42m  2s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 54s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  24m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 51s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m  6s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 36s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  10m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 21s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  28m 21s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 15 new + 148 unchanged - 
15 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  28m 21s |  the patch passed  |
   | -1 :x: |  javac  |  28m 21s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 1 new + 2053 unchanged - 
4 fixed = 2054 total (was 2057)  |
   | +1 :green_heart: |  compile  |  24m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  24m 21s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 12 new + 151 unchanged - 
12 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  24m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |  24m 21s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1947 unchanged - 
3 fixed = 1947 total (was 1950)  |
   | -0 :warning: |  checkstyle  |   4m  4s |  root: The patch generated 3 new 
+ 213 unchanged - 8 fixed = 216 total (was 221)  |
   | +1 :green_heart: |  mvnsite  |   4m 46s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  4s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 35s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |  11m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 28s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 37s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 131m 59s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 440m 24s |   |

[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS

2020-08-28 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated HDFS-15098:

Attachment: (was: HDFS-15098.009.patch)

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, 
> HDFS-15098.009.patch, image-2020-08-19-16-54-41-341.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (WLAN Authentication and Privacy Infrastructure).
> SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS.
> 2. Test HDFS with SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
> 1. OpenSSL version >= 1.1.1
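
The steps quoted above can also be performed programmatically. A hedged Java sketch using HdfsAdmin follows; the NameNode URI is a placeholder, and the key "key1" is assumed to already exist in KMS:

```java
import java.net.URI;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class Sm4ZoneSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder URI; "key1" must already exist in KMS, e.g. created with:
    //   hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
    URI nn = new URI("hdfs://namenode:8020");
    HdfsAdmin admin = new HdfsAdmin(nn, conf);

    Path zone = new Path("/benchmarks");
    FileSystem.get(nn, conf).mkdirs(zone);

    // Equivalent to: hdfs crypto -createZone -keyName key1 -path /benchmarks
    admin.createEncryptionZone(zone, "key1",
        EnumSet.of(CreateEncryptionZoneFlag.PROVISION_TRASH));
  }
}
```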



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-08-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186318#comment-17186318
 ] 

Hadoop QA commented on HDFS-15098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:blue}0{color} | {color:blue} buf {color} | {color:blue}  0m  0s{color} 
| {color:blue} buf was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
30s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
58s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
41s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 25m 41s{color} | 
{color:red} root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 26 new + 137 unchanged - 
26 fixed = 163 total (was 163) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 25m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 25m 41s{color} 
| {color:red} root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 1 new + 2053 unchanged - 
5 fixed = 2054 total (was 2058) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
57s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 20m 57s{color} | 
{color:red} root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with 
JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 31 new + 132 
unchanged - 31 fixed = 163 total (was 163) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 20m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | 

[jira] [Work logged] (HDFS-15545) (S)Webhdfs will not use updated delegation tokens available in the ugi after the old ones expire

2020-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15545?focusedWorklogId=475712&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-475712
 ]

ASF GitHub Bot logged work on HDFS-15545:
-

Author: ASF GitHub Bot
Created on: 28/Aug/20 07:07
Start Date: 28/Aug/20 07:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2255:
URL: https://github.com/apache/hadoop/pull/2255#issuecomment-682366218


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  27m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 57s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 32s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 42s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m 42s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  1s |  hadoop-hdfs-project: The patch 
generated 2 new + 69 unchanged - 0 fixed = 71 total (was 69)  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 12s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 103m 47s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 246m 14s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2255/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c83799c08c53 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39