[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351130#comment-16351130
 ] 

genericqa commented on HDFS-13100:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909042/HDFS-13100.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a3fb22cc86fd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5072388 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22928/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22928/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350960#comment-16350960
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Thanks a lot [~kihwal], will commit soon.


> Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented 
> in webhdfs.
> 
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HDFS-13100.001.patch, HDFS-13100.002.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs and expects clusters
> that don't support it to throw UnsupportedOperationException. However, we are
> seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
> -update -skipcrccheck webhdfs://:/fileX 
> hdfs://:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>   at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward compatibility 
> and easier operation in the field, the latter is preferred.
> But we'd better understand why IllegalArgumentException is thrown instead of 
> UnsupportedOperationException. 
> The correct way to do this is: check whether the operation is supported, and 
> throw UnsupportedOperationException if not; then check whether the parameter 
> is legal, and throw IllegalArgumentException if it is not. We can do that fix 
> as a follow-up to this jira.
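
To make the proposed client-side handling concrete, here is a minimal sketch
(the helper class, its name, and the null-on-unsupported contract are
illustrative assumptions, not the actual HDFS-13100 patch):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
import org.apache.hadoop.ipc.RemoteException;

// Hypothetical helper: treat both exception types coming back from the server
// as "GETSERVERDEFAULTS not supported" so callers can fall back to the
// pre-HDFS-12386 behavior instead of failing the whole distcp job.
class ServerDefaultsFallback {
  static FsServerDefaults getOrNull(WebHdfsFileSystem fs) throws IOException {
    try {
      return fs.getServerDefaults();
    } catch (UnsupportedOperationException e) {
      // A server with the intended behavior explicitly reports "unsupported".
      return null;
    } catch (RemoteException re) {
      // An old server rejects the unknown "op" value during request parsing,
      // so the client sees a RemoteException wrapping IllegalArgumentException.
      String wrapped = re.getClassName();
      if (IllegalArgumentException.class.getName().equals(wrapped)
          || UnsupportedOperationException.class.getName().equals(wrapped)) {
        return null;
      }
      throw re;
    }
  }
}
{code}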




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350958#comment-16350958
 ] 

Kihwal Lee commented on HDFS-13100:
---

branch-3.0.1 is already cut. Be sure to commit it there too.


[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350955#comment-16350955
 ] 

Kihwal Lee commented on HDFS-13100:
---

+1. Precommit successfully ran against the same code. The only diff is the 
typo-fix in the comment.


[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350921#comment-16350921
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Hi [~kihwal],

Many thanks for the review and the additional tests! Please see rev2, which 
removes the typo.




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350445#comment-16350445
 ] 

Kihwal Lee commented on HDFS-13100:
---

I ran more tests (416 total) and they all passed.
{noformat}$ mvn test -Dtest=*ebH*

Results :

Tests run: 416, Failures: 0, Errors: 0, Skipped: 1
{noformat}


[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350382#comment-16350382
 ] 

Kihwal Lee commented on HDFS-13100:
---

I know you just copied the comment from the previous section, but let's take 
this opportunity to fix the typo: "supports" -> "support"
{code}
// This means server doesn't supports GETSERVERDEFAULTS call.
{code}
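
With the typo fix applied, the comment would read:
{code}
// This means server doesn't support GETSERVERDEFAULTS call.
{code}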



[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349836#comment-16349836
 ] 

genericqa commented on HDFS-13100:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908920/HDFS-13100.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 28fafd96a1b4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b0627c8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22923/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22923/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 

[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349804#comment-16349804
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Hi [~kihwal],

Thanks again for your help!

I attached a patch (which ATM and I worked out); we tested it on a real 
cluster. Would you please give it a quick review?

Many thanks.




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349558#comment-16349558
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Thank you so much [~kihwal]! 

That's a great help! I was about to set up my clusters to dig out the trace. 



[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349433#comment-16349433
 ] 

Kihwal Lee commented on HDFS-13100:
---

So it's not the Hadoop code explicitly doing {{valueOf()}}, but Jersey's 
request handling that's attempting to convert the enum name to a value. 
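
A minimal sketch of that failure mode (the enum here is an illustrative
stand-in for {{GetOpParam.Op}}, not the real class):
{code}
// Stand-in for an old server's GetOpParam.Op, which predates
// GETSERVERDEFAULTS. Jersey converts the raw "op" query parameter with a
// plain valueOf(), so the conversion fails during parameter injection,
// before any WebHdfs handler code runs.
public class OpParamDemo {
  enum Op { GETFILESTATUS, LISTSTATUS, OPEN }

  public static void main(String[] args) {
    // Throws java.lang.IllegalArgumentException:
    //   "No enum constant OpParamDemo.Op.GETSERVERDEFAULTS"
    System.out.println(Op.valueOf("GETSERVERDEFAULTS"));
  }
}
{code}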


[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349424#comment-16349424
 ] 

Kihwal Lee commented on HDFS-13100:
---

Got one at TRACE.
{noformat}
2018-02-01 22:38:55,161 [165538@qtp-1927963027-3 - 
/webhdfs/v1/?op=GETSERVERDEFAULTS] TRACE resources.ExceptionHandler: GOT 
EXCEPITION
com.sun.jersey.api.ParamException$QueryParamException: 
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
at 
com.sun.jersey.server.impl.model.parameter.QueryParamInjectableProvider$QueryParamInjectable.getValue(QueryParamInjectableProvider.java:74)
at 
com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:99)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
at com.yahoo.hadoop.GzipFilter.doFilter(GzipFilter.java:220)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1353)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:322)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 

[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349392#comment-16349392
 ] 

Kihwal Lee commented on HDFS-13100:
---

Reproduced the issue. Enabled DEBUG-level logging on the NN, but nothing useful 
was logged. 
{noformat}
[705810816@qtp-1429483328-4 - /webhdfs/v1/?op=GETSERVERDEFAULTS] DEBUG 
common.JspHelper: getUGI is returning: kihwal
[705810816@qtp-1429483328-4 - /webhdfs/v1/?op=GETSERVERDEFAULTS] DEBUG 
mortbay.log: RESPONSE /webhdfs/v1/ 400
{noformat}
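
For reference, the 400 should be reproducible with a plain HTTP request against 
the NN web port (host, port, and the response body below are placeholders / 
abridged); the body carries the RemoteException JSON that the remote client 
later re-throws:
{noformat}
$ curl -i "http://<nn-host>:<http-port>/webhdfs/v1/?op=GETSERVERDEFAULTS&user.name=kihwal"
HTTP/1.1 400 Bad Request
...
{"RemoteException":{"exception":"IllegalArgumentException",
"javaClassName":"java.lang.IllegalArgumentException",
"message":"Invalid value for webhdfs parameter \"op\": No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS"}}
{noformat}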


> Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented 
> in webhdfs.
> 
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>
> HDFS-12386 added the getserverdefaults call to webhdfs, and expects clusters 
> that don't support it to throw UnsupportedOperationException. However, we are 
> seeing
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
> -update -skipcrccheck webhdfs://:/fileX 
> hdfs://:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>   at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward compatibility 
> and easier operation in the field, the latter is preferred.
> But we'd better understand why IllegalArgumentException is thrown instead of 
> UnsupportedOperationException. 
> The correct way to do this is: check if the operation is supported, and throw 
> UnsupportedOperationException if not; then check if the parameter is legal, 
> and throw IllegalArgumentException if it's not. We can do that fix as a 
> follow-up of this jira.
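
A minimal sketch of the ordering described in the last quoted paragraph (class 
and method names are hypothetical, not the actual NN resource code): classify 
"unsupported operation" before any parameter parsing that could throw 
IllegalArgumentException.
{code}
import java.util.Arrays;
import java.util.List;

public class OpDispatchSketch {
  // Hypothetical: the ops this server version actually implements.
  private static final List<String> SUPPORTED_OPS =
      Arrays.asList("OPEN", "GETFILECHECKSUM", "LISTSTATUS");

  static void dispatch(String op) {
    if (!SUPPORTED_OPS.contains(op)) {
      // A client asked for an op this server predates: say so explicitly.
      throw new UnsupportedOperationException(op + " is not supported");
    }
    // Only now parse and validate the remaining parameters; a bad value here
    // is a genuine IllegalArgumentException rather than a missing feature.
    // ... dispatch to the handler for op ...
  }
}
{code}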




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349188#comment-16349188
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Thanks a lot for your comments [~kihwal].

I amended the description to make it clearer. 

The handler code I posted earlier is 

{code}
  public void handle(ChannelHandlerContext ctx, HttpRequest req)
throws IOException, URISyntaxException {
String op = params.op();
HttpMethod method = req.getMethod();
if (PutOpParam.Op.CREATE.name().equalsIgnoreCase(op)
  && method == PUT) {
  onCreate(ctx);
} else if (PostOpParam.Op.APPEND.name().equalsIgnoreCase(op)
  && method == POST) {
  onAppend(ctx);
} else if (GetOpParam.Op.OPEN.name().equalsIgnoreCase(op)
  && method == GET) {
  onOpen(ctx);
} else if(GetOpParam.Op.GETFILECHECKSUM.name().equalsIgnoreCase(op)
  && method == GET) {
  onGetFileChecksum(ctx);
} else if(PutOpParam.Op.CREATE.name().equalsIgnoreCase(op)
&& method == OPTIONS) {
  allowCORSOnCreate(ctx);
} else {
  throw new IllegalArgumentException("Invalid operation " + op);
}
}
{code}
This code is in WebHdfsHandler.java. I just noticed that the "Invalid 
operation" message from this code does not appear in my run, so this might not 
be the right code path. 

My error message complains "No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS", so some 
translation from op=GETSERVERDEFAULTS to an enum constant is happening 
somewhere and throwing the IllegalArgumentException with this message. Do you 
have any idea where this translation happens?
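
For reference, the "No enum constant" wording is exactly what 
{{java.lang.Enum.valueOf}} produces when a name matches no constant, which 
suggests the translation happens wherever Jersey parses the {{op}} query 
parameter into {{GetOpParam.Op}} (note the {{QueryParamInjectableProvider}} 
frame in the trace earlier in this thread). A standalone sketch with a 
hypothetical enum:
{code}
// Standalone demo (hypothetical enum, not the Hadoop class): Enum.valueOf()
// is what produces the "No enum constant <Type>.<NAME>" message.
enum Op { OPEN, GETFILECHECKSUM }  // predates GETSERVERDEFAULTS

public class EnumParseDemo {
  public static void main(String[] args) {
    try {
      Op.valueOf("GETSERVERDEFAULTS");
    } catch (IllegalArgumentException e) {
      // Prints: No enum constant Op.GETSERVERDEFAULTS
      System.out.println(e.getMessage());
    }
  }
}
{code}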

Many thanks.



[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348777#comment-16348777
 ] 

Kihwal Lee commented on HDFS-13100:
---

The {{handle()}} method you are showing is from {{WebHdfsHandler}}, used by the 
datanode, isn't it?  Did you see clients calling getServerDefaults() against 
datanodes?

> Handle IllegalArgumentException in when GETSERVERDEFAULTS is not implemented 
> in webhdfs.
> 
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>
> HDFS-12386 added the getserverdefaults call to webhdfs, and expects clusters 
> that don't support it to throw UnsupportedOperationException. However, in 
> that case, the following handler is called, IllegalArgumentException is 
> thrown instead of UnsupportedOperationException, and the client side fails to 
> deal with the IllegalArgumentException.
> {code}
>   public void handle(ChannelHandlerContext ctx, HttpRequest req)
> throws IOException, URISyntaxException {
> String op = params.op();
> HttpMethod method = req.getMethod();
> if (PutOpParam.Op.CREATE.name().equalsIgnoreCase(op)
>   && method == PUT) {
>   onCreate(ctx);
> } else if (PostOpParam.Op.APPEND.name().equalsIgnoreCase(op)
>   && method == POST) {
>   onAppend(ctx);
> } else if (GetOpParam.Op.OPEN.name().equalsIgnoreCase(op)
>   && method == GET) {
>   onOpen(ctx);
> } else if(GetOpParam.Op.GETFILECHECKSUM.name().equalsIgnoreCase(op)
>   && method == GET) {
>   onGetFileChecksum(ctx);
> } else if(PutOpParam.Op.CREATE.name().equalsIgnoreCase(op)
> && method == OPTIONS) {
>   allowCORSOnCreate(ctx);
> } else {
>   throw new IllegalArgumentException("Invalid operation " + op);
> }
> }
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward compatibility 
> and easier operation in the field, the latter is preferred.
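
A hedged sketch of the preferred client-side option (helper name hypothetical; 
not the committed patch): treat the RemoteException that wraps 
IllegalArgumentException from a pre-GETSERVERDEFAULTS server as "operation 
unsupported", so callers such as getKeyProviderUri() can fall back instead of 
failing the whole distcp.
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.ipc.RemoteException;

public class ServerDefaultsFallback {
  static FsServerDefaults getServerDefaultsOrNull(FileSystem fs)
      throws IOException {
    try {
      return fs.getServerDefaults(new Path("/"));
    } catch (UnsupportedOperationException e) {
      return null;  // server explicitly reports the op as unimplemented
    } catch (RemoteException re) {
      if (IllegalArgumentException.class.getName()
          .equals(re.getClassName())) {
        return null;  // old server rejected the unknown "op" value
      }
      throw re;  // anything else is a real failure
    }
  }
}
{code}
Checking {{RemoteException#getClassName()}} keeps unrelated IOExceptions fatal 
while tolerating only the specific failure old servers produce.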



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org