[jira] [Commented] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed

2020-07-12 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156474#comment-17156474
 ] 

Xiaoqiao He commented on HDFS-14498:


Thanks [~sodonnell] for double-checking. Will commit it later today.

> LeaseManager can loop forever on the file for which create has failed 
> --
>
> Key: HDFS-14498
> URL: https://issues.apache.org/jira/browse/HDFS-14498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0
>Reporter: Sergey Shelukhin
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14498.001.patch, HDFS-14498.002.patch
>
>
> The logs from the file creation are long gone due to the infinite lease 
> logging; however, the create presumably failed... the client that was trying 
> to write this file is definitely long dead.
> The version includes HDFS-4882.
> We get this log pattern repeating infinitely:
> {noformat}
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard 
> limit
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src=
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: 
> Failed to release lease for file . Committed blocks are waiting to be 
> minimally replicated. Try again later.
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path 
>  in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-20898906_61, 
> pending creates: 1]. It will be retried.
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* 
> NameSystem.internalReleaseLease: Failed to release lease for file . 
> Committed blocks are waiting to be minimally replicated. Try again later.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509)
>   at java.lang.Thread.run(Thread.java:745)
> $  grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 
> 1" hdfs_nn*
> hdfs_nn.log:1068035
> hdfs_nn.log.2019-05-16-14:1516179
> hdfs_nn.log.2019-05-16-15:1538350
> {noformat}
> Aside from an actual bug fix, it might make sense to make LeaseManager not 
> log so much, in case there are more bugs like this...
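> A tiny, self-contained simulation of the failure mode follows (all names and 
> messages here are illustrative stand-ins, not the actual LeaseManager code): 
> the release call keeps throwing while committed blocks are under-replicated, 
> the lease is never removed, so every monitor pass logs the same lines again.
> {code:java}
> import java.io.IOException;
> 
> public class LeaseLoopDemo {
>   // In the reported bug this condition never becomes true for the file.
>   static boolean blocksMinimallyReplicated = false;
> 
>   static void internalReleaseLease(String src) throws IOException {
>     if (!blocksMinimallyReplicated) {
>       throw new IOException("Failed to release lease for file " + src
>           + ". Committed blocks are waiting to be minimally replicated."
>           + " Try again later.");
>     }
>   }
> 
>   public static void main(String[] args) throws InterruptedException {
>     String src = "/some/file";
>     for (int pass = 1; pass <= 3; pass++) { // the real monitor loops forever
>       System.out.println("INFO [Lease] has expired hard limit");
>       try {
>         internalReleaseLease(src);
>       } catch (IOException e) {
>         System.out.println("WARN Cannot release the path " + src
>             + " in the lease. It will be retried. " + e.getMessage());
>       }
>       Thread.sleep(10); // the real monitor sleeps between passes
>     }
>   }
> }
> {code}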



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15465) Support WebHDFS accesses to the data stored in secure Datanode through insecure Namenode

2020-07-12 Thread Toshihiko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156465#comment-17156465
 ] 

Toshihiko Uchida commented on HDFS-15465:
-

Created a PR: [https://github.com/apache/hadoop/pull/2135].

> Support WebHDFS accesses to the data stored in secure Datanode through 
> insecure Namenode
> 
>
> Key: HDFS-15465
> URL: https://issues.apache.org/jira/browse/HDFS-15465
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: federation, webhdfs
>Reporter: Toshihiko Uchida
>Assignee: Toshihiko Uchida
>Priority: Minor
> Attachments: webhdfs-federation.pdf
>
>
> We're federating a secure HDFS cluster with an insecure cluster.
> Using HDFS RPC, we can access the data managed by the insecure Namenode and 
> stored on the secure Datanode.
> However, this does not work over WebHDFS due to a HadoopIllegalArgumentException.
> {code}
> $ curl -i "http://<namenode>:<port>/webhdfs/v1/<path>?op=OPEN"
> HTTP/1.1 307 TEMPORARY_REDIRECT
> (omitted)
> Location: 
> http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0
> $ curl -i 
> "http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0"
> HTTP/1.1 400 Bad Request
> (omitted)
> {"RemoteException":{"exception":"HadoopIllegalArgumentException","javaClassName":"org.apache.hadoop.HadoopIllegalArgumentException","message":"Invalid
>  argument, newValue is null"}}
> {code}
> This is because the secure Datanode expects a delegation token, but the 
> insecure Namenode does not return one to the client.
> - org.apache.hadoop.security.token.Token.decodeWritable
> {code}
>   private static void decodeWritable(Writable obj,
>  String newValue) throws IOException {
> if (newValue == null) {
>   throw new HadoopIllegalArgumentException(
>   "Invalid argument, newValue is null");
> }
> {code}
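> For clarity, a minimal standalone reproduction of just this guard (the method 
> below is a hypothetical stand-in for Token.decodeWritable, which base64-decodes 
> newValue into the Writable when it is non-null):
> {code:java}
> import java.io.IOException;
> 
> public class NullTokenDemo {
>   // Stand-in for Token.decodeWritable: only the null guard is mirrored.
>   static void decodeWritable(String newValue) throws IOException {
>     if (newValue == null) {
>       throw new IllegalArgumentException("Invalid argument, newValue is null");
>     }
>     // ... the real method decodes newValue here ...
>   }
> 
>   public static void main(String[] args) throws IOException {
>     // The insecure Namenode redirects without a delegation parameter,
>     // so the secure Datanode effectively calls this with null:
>     decodeWritable(null); // -> the 400 Bad Request seen above
>   }
> }
> {code}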
> This issue proposes to support such accesses over WebHDFS as well.
> The attached PDF file [^webhdfs-federation.pdf] depicts our current 
> architecture and proposal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-07-12 Thread zZtai (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156422#comment-17156422
 ] 

zZtai commented on HDFS-15098:
--

[~vinayakumarb] Thank you for your advice. We will deal with it as soon as 
possible.

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: zZtai
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
> SM4 was a cipher proposed for the IEEE 802.11i standard, but it has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Download the Bouncy Castle Crypto APIs from bouncycastle.org:
> [https://bouncycastle.org/download/bcprov-ext-jdk15on-165.jar]
> 2. Configure the JDK:
> place bcprov-ext-jdk15on-165.jar in the $JAVA_HOME/jre/lib/ext directory, and 
> add "security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider" 
> to the $JAVA_HOME/jre/lib/security/java.security file.
> 3. Configure Hadoop KMS.
> 4. Test HDFS SM4:
> hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
> hdfs dfs -mkdir /benchmarks
> hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
> 1. OpenSSL version >= 1.1.1
> 2. Bouncy Castle Crypto configured on the JDK
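> Before wiring SM4 into KMS, the provider can be sanity-checked with a minimal 
> standalone JCE round trip (assuming bcprov-ext-jdk15on-165.jar is on the 
> classpath; this snippet is illustrative and not part of the patch):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.security.Security;
> import javax.crypto.Cipher;
> import javax.crypto.spec.IvParameterSpec;
> import javax.crypto.spec.SecretKeySpec;
> import org.bouncycastle.jce.provider.BouncyCastleProvider;
> 
> public class Sm4Check {
>   public static void main(String[] args) throws Exception {
>     Security.addProvider(new BouncyCastleProvider());
>     byte[] key = new byte[16]; // SM4 uses a 128-bit key
>     byte[] iv = new byte[16];  // CTR mode uses a 128-bit IV/counter
>     SecretKeySpec k = new SecretKeySpec(key, "SM4");
>     Cipher c = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
>     c.init(Cipher.ENCRYPT_MODE, k, new IvParameterSpec(iv));
>     byte[] ct = c.doFinal("hello sm4".getBytes(StandardCharsets.UTF_8));
>     c.init(Cipher.DECRYPT_MODE, k, new IvParameterSpec(iv));
>     System.out.println(new String(c.doFinal(ct), StandardCharsets.UTF_8));
>   }
> }
> {code}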



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12969) DfsAdmin listOpenFiles should report files by type

2020-07-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156325#comment-17156325
 ] 

Hadoop QA commented on HDFS-12969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
9s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 58s{color} | 
{color:red} root generated 16 new + 146 unchanged - 16 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 35s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}252m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 

[jira] [Commented] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links

2020-07-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156292#comment-17156292
 ] 

Hadoop QA commented on HDFS-15464:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
2s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
28s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-common in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  

[jira] [Updated] (HDFS-12969) DfsAdmin listOpenFiles should report files by type

2020-07-12 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-12969:
-
Attachment: HDFS-12969.002.patch

> DfsAdmin listOpenFiles should report files by type
> --
>
> Key: HDFS-12969
> URL: https://issues.apache.org/jira/browse/HDFS-12969
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Manoj Govindassamy
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-12969.001.patch, HDFS-12969.002.patch
>
>
> HDFS-11847 introduced a new option, {{-blockingDecommission}}, to the 
> existing command {{dfsadmin -listOpenFiles}}. But the reporting done by the 
> command doesn't differentiate the files by type (such as blocking 
> decommission). In order to change the reporting style, the proto format used 
> for the base command has to be updated to carry additional fields, which is 
> better done in a new jira outside of HDFS-11847. This jira is to track the 
> end-to-end enhancements needed for the dfsadmin -listOpenFiles console output.
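> For context, a hedged sketch of the console usage this jira extends (column 
> layout and values illustrative only):
> {noformat}
> $ hdfs dfsadmin -listOpenFiles
> $ hdfs dfsadmin -listOpenFiles -blockingDecommission
> Client Host         Client Name                        Open File Path
> 192.168.0.1         DFSClient_NONMAPREDUCE_12345_1     /tmp/file1
> {noformat}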



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15454) ViewFsOverloadScheme should not display error message with "viewfs://" even when it's initialized with other fs.

2020-07-12 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-15454.

Resolution: Fixed

After HDFS-15464, this message should no longer appear, as the target fs is 
now automatically treated as the fallback when there are no mount tables.

> ViewFsOverloadScheme should not display error message with "viewfs://" even 
> when it's initialized with other fs.
> 
>
> Key: HDFS-15454
> URL: https://issues.apache.org/jira/browse/HDFS-15454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> Currently ViewFsOverloadScheme extends ViewFileSystem. When there are no 
> mount links, fs initialization fails and throws an exception. When it fails, 
> even if it was initialized via ViewFsOverloadScheme (any scheme can be 
> initialized, let's say hdfs://clustername), the exception message always 
> refers to "viewfs://...":
> {code:java}
> java.io.IOException: ViewFs: Cannot initialize: Empty Mount table in config 
> for viewfs://clustername/ 
> {code}
> The message should instead read:
> {code:java}
> java.io.IOException: ViewFs: Cannot initialize: Empty Mount table in config 
> for hdfs://clustername/ 
> {code}
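> One way to get the corrected text is to build the message from the URI that 
> initialize() actually received instead of hard-coding the viewfs scheme (a 
> hedged sketch; the real field and method names may differ):
> {code:java}
> import java.io.IOException;
> import java.net.URI;
> 
> class MountTableErrors {
>   // Hypothetical helper: derive scheme/authority in the error text from
>   // the initialized URI rather than assuming viewfs://.
>   static IOException emptyMountTable(URI theUri) {
>     return new IOException("ViewFs: Cannot initialize: Empty Mount table"
>         + " in config for " + theUri.getScheme() + "://"
>         + theUri.getAuthority() + "/");
>   }
> }
> {code}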



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15447) RBF: Add top owners metrics for delegation tokens

2020-07-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156226#comment-17156226
 ] 

Hudson commented on HDFS-15447:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18425/])
HDFS-15447 RBF: Add top real owners metrics for delegation tokens (github: rev 
84b74b335c0251afa672643352c6b7ecf003e0fb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RouterMBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java


> RBF: Add top owners metrics for delegation tokens
> -
>
> Key: HDFS-15447
> URL: https://issues.apache.org/jira/browse/HDFS-15447
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Over time we have seen token-bombarding behavior multiple times, either due 
> to mistakes or to a user issuing a huge amount of traffic. Having this metric 
> will help figure out much faster who or which service owns these tokens, and 
> stop the behavior more quickly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15463) Add a tool to validate FsImage

2020-07-12 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156225#comment-17156225
 ] 

Tsz-wo Sze commented on HDFS-15463:
---

FsImageValidation20200709.patch: check INodeReference subclasses.

FsImageValidation20200712.patch: implements org.apache.hadoop.util.Tool.
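For readers unfamiliar with the pattern, the org.apache.hadoop.util.Tool 
contract looks like the skeleton below (the class name and body here are 
hypothetical, not the patch's actual code):
{code:java}
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class FsImageValidationTool extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // validate the fsimage named by args here; return 0 on success
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner parses generic options (-conf, -D, -fs, ...) into the
    // Configuration before delegating to run().
    System.exit(ToolRunner.run(new FsImageValidationTool(), args));
  }
}
{code}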

> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted. Using a 
> corrupted fsimage may further result in data loss.
> In some cases, we found that reference counts are incorrect in corrupted 
> FsImages. One of the goals of the validation tool is to check reference 
> counts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15463) Add a tool to validate FsImage

2020-07-12 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-15463:
--
Attachment: FsImageValidation20200712.patch

> Add a tool to validate FsImage
> --
>
> Key: HDFS-15463
> URL: https://issues.apache.org/jira/browse/HDFS-15463
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Attachments: FsImageValidation20200709.patch, 
> FsImageValidation20200712.patch
>
>
> Due to some snapshot-related bugs, an fsimage may become corrupted. Using a 
> corrupted fsimage may further result in data loss.
> In some cases, we found that reference counts are incorrect in corrupted 
> FsImages. One of the goals of the validation tool is to check reference 
> counts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links

2020-07-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17156219#comment-17156219
 ] 

Hudson commented on HDFS-15464:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18424 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18424/])
HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
(github: rev 3e700066394fb9f516e23537d8abb4661409cae1)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsConfig.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> ViewFsOverloadScheme should work when -fs option pointing to remote cluster 
> without mount links
> ---
>
> Key: HDFS-15464
> URL: https://issues.apache.org/jira/browse/HDFS-15464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> When users try to connect to a remote cluster from a cluster env where 
> ViewFSOverloadScheme is enabled, fs initialization expects at least one mount 
> link in order to succeed.
> Unfortunately you might not have configured any mount links for that remote 
> cluster in your current env. You would have configured mount points only for 
> your local clusters.
> In this case fs init will fail because no mount points are configured in the 
> mount table for that remote cluster uri's authority.
> One idea is that, when there are no mount links configured, we should just 
> treat that as the default cluster; that can be achieved by considering it as 
> the fallback option automatically.
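> As a concrete illustration (cluster names hypothetical, config keys 
> abbreviated), the failing setup looks roughly like this:
> {noformat}
> # hdfs:// is mapped to the overload scheme, and mount links exist only
> # for the local cluster:
> #   fs.hdfs.impl = org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme
> #   fs.viewfs.mounttable.localcluster.link./data = hdfs://localcluster/data
> $ hadoop fs -fs hdfs://remotecluster/ -ls /
> # before: fs init fails (no mount points for authority "remotecluster")
> # after:  hdfs://remotecluster/ is automatically used as the fallback fs
> {noformat}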



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15464) ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links

2020-07-12 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15464:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn] for the review. I have committed it to trunk.

> ViewFsOverloadScheme should work when -fs option pointing to remote cluster 
> without mount links
> ---
>
> Key: HDFS-15464
> URL: https://issues.apache.org/jira/browse/HDFS-15464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.4.0
>
>
> When users try to connect to a remote cluster from a cluster env where 
> ViewFSOverloadScheme is enabled, fs initialization expects at least one mount 
> link in order to succeed.
> Unfortunately you might not have configured any mount links for that remote 
> cluster in your current env. You would have configured mount points only for 
> your local clusters.
> In this case fs init will fail because no mount points are configured in the 
> mount table for that remote cluster uri's authority.
> One idea is that, when there are no mount links configured, we should just 
> treat that as the default cluster; that can be achieved by considering it as 
> the fallback option automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15465) Support WebHDFS accesses to the data stored in secure Datanode through insecure Namenode

2020-07-12 Thread Toshihiko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15465 started by Toshihiko Uchida.
---
> Support WebHDFS accesses to the data stored in secure Datanode through 
> insecure Namenode
> 
>
> Key: HDFS-15465
> URL: https://issues.apache.org/jira/browse/HDFS-15465
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: federation, webhdfs
>Reporter: Toshihiko Uchida
>Assignee: Toshihiko Uchida
>Priority: Minor
> Attachments: webhdfs-federation.pdf
>
>
> We're federating a secure HDFS cluster with an insecure cluster.
> Using HDFS RPC, we can access the data managed by the insecure Namenode and 
> stored on the secure Datanode.
> However, this does not work over WebHDFS due to a HadoopIllegalArgumentException.
> {code}
> $ curl -i "http://<namenode>:<port>/webhdfs/v1/<path>?op=OPEN"
> HTTP/1.1 307 TEMPORARY_REDIRECT
> (omitted)
> Location: 
> http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0
> $ curl -i 
> "http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0"
> HTTP/1.1 400 Bad Request
> (omitted)
> {"RemoteException":{"exception":"HadoopIllegalArgumentException","javaClassName":"org.apache.hadoop.HadoopIllegalArgumentException","message":"Invalid
>  argument, newValue is null"}}
> {code}
> This is because the secure Datanode expects a delegation token, but the 
> insecure Namenode does not return one to the client.
> - org.apache.hadoop.security.token.Token.decodeWritable
> {code}
>   private static void decodeWritable(Writable obj,
>  String newValue) throws IOException {
> if (newValue == null) {
>   throw new HadoopIllegalArgumentException(
>   "Invalid argument, newValue is null");
> }
> {code}
> This issue proposes to support such accesses over WebHDFS as well.
> The attached PDF file [^webhdfs-federation.pdf] depicts our current 
> architecture and proposal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15465) Support WebHDFS accesses to the data stored in secure Datanode through insecure Namenode

2020-07-12 Thread Toshihiko Uchida (Jira)
Toshihiko Uchida created HDFS-15465:
---

 Summary: Support WebHDFS accesses to the data stored in secure 
Datanode through insecure Namenode
 Key: HDFS-15465
 URL: https://issues.apache.org/jira/browse/HDFS-15465
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: federation, webhdfs
Reporter: Toshihiko Uchida
Assignee: Toshihiko Uchida
 Attachments: webhdfs-federation.pdf

We're federating a secure HDFS cluster with an insecure cluster.
Using HDFS RPC, we can access the data managed by the insecure Namenode and 
stored on the secure Datanode.
However, this does not work over WebHDFS due to a HadoopIllegalArgumentException.
{code}
$ curl -i "http://<namenode>:<port>/webhdfs/v1/<path>?op=OPEN"
HTTP/1.1 307 TEMPORARY_REDIRECT
(omitted)
Location: 
http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0
$ curl -i 
"http://<datanode>:<port>/webhdfs/v1/<path>?op=OPEN&namenoderpcaddress=<namenode:rpcport>&offset=0"
HTTP/1.1 400 Bad Request
(omitted)
{"RemoteException":{"exception":"HadoopIllegalArgumentException","javaClassName":"org.apache.hadoop.HadoopIllegalArgumentException","message":"Invalid
 argument, newValue is null"}}
{code}
This is because the secure Datanode expects a delegation token, but the 
insecure Namenode does not return one to the client.
- org.apache.hadoop.security.token.Token.decodeWritable
{code}
  private static void decodeWritable(Writable obj,
 String newValue) throws IOException {
if (newValue == null) {
  throw new HadoopIllegalArgumentException(
  "Invalid argument, newValue is null");
}
{code}

This issue proposes to support such accesses over WebHDFS as well.
The attached PDF file [^webhdfs-federation.pdf] depicts our current 
architecture and proposal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org