[ https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351130#comment-16351130 ]

genericqa commented on HDFS-13100:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 41s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 1 new + 58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 29s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13100 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12909042/HDFS-13100.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a3fb22cc86fd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5072388 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/22928/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22928/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22928/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22928/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-13100
>                 URL: https://issues.apache.org/jira/browse/HDFS-13100
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs, webhdfs
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Critical
>         Attachments: HDFS-13100.001.patch, HDFS-13100.002.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs and expects clusters that don't support it to throw UnsupportedOperationException. However, we are seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>       at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>       at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>       at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>       at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>       at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>       at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>       at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>       at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>       at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>       at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>       at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>       at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or make the client handle IllegalArgumentException. For backward compatibility and easier operation in the field, the latter is preferred.
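>
> A minimal sketch of that client-side handling, under the assumption that the fix simply widens the set of exceptions treated as "not implemented"; the helper getServerDefaultsOrNull is hypothetical, not the actual patch:
> {code}
> import java.io.IOException;
>
> public class ServerDefaultsFallbackSketch {
>   interface ServerDefaultsCall<T> {
>     T call() throws IOException;
>   }
>
>   // Hypothetical helper: return null when the remote cluster does not
>   // implement GETSERVERDEFAULTS, whether it signals that with
>   // UnsupportedOperationException (a server that knows the op but rejects
>   // it) or IllegalArgumentException (an old server that cannot parse "op").
>   static <T> T getServerDefaultsOrNull(ServerDefaultsCall<T> rpc) throws IOException {
>     try {
>       return rpc.call();
>     } catch (UnsupportedOperationException | IllegalArgumentException e) {
>       return null; // caller falls back to client-side defaults
>     }
>   }
> }
> {code}
> Note the stack trace above shows the exception arriving wrapped in org.apache.hadoop.ipc.RemoteException, so the real patch would also have to recognize the wrapped exception class; the sketch only shows the widened catch.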
> But we'd better understand why IllegalArgumentException is thrown instead of UnsupportedOperationException.
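>
> The "No enum constant" message suggests the likely mechanism (a minimal, self-contained sketch below; OldGetOp is a hypothetical stand-in, not the actual Hadoop source): the webhdfs "op" query parameter is resolved against the GetOpParam.Op enum, and on a release that predates HDFS-12386 that enum has no GETSERVERDEFAULTS constant, so the lookup fails with IllegalArgumentException before the server ever decides whether the operation is supported.
> {code}
> public class OpParseSketch {
>   // Stand-in for org.apache.hadoop.hdfs.web.resources.GetOpParam.Op on a
>   // release without HDFS-12386, i.e. without a GETSERVERDEFAULTS constant.
>   enum OldGetOp { OPEN, GETFILESTATUS, LISTSTATUS, GETDELEGATIONTOKEN }
>
>   public static void main(String[] args) {
>     try {
>       // Enum.valueOf is where an unknown "op" string blows up:
>       OldGetOp.valueOf("GETSERVERDEFAULTS");
>     } catch (IllegalArgumentException e) {
>       // Prints: No enum constant OpParseSketch.OldGetOp.GETSERVERDEFAULTS
>       System.out.println(e.getMessage());
>     }
>   }
> }
> {code}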
> The correct way to do this is: check whether the operation is supported, and throw UnsupportedOperationException if not; then check whether the parameters are legal, and throw IllegalArgumentException if they are not. We can do that fix as a follow-up to this jira.
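>
> A minimal sketch of that validation order (hypothetical names, not the actual NameNode code): recognize the operation first, so an unknown op surfaces as UnsupportedOperationException, and only validate arguments for operations the server actually knows:
> {code}
> public class OpValidationOrderSketch {
>   // Stand-in for the set of operations this server build supports.
>   enum SupportedOp { OPEN, GETFILESTATUS, LISTSTATUS }
>
>   static SupportedOp parseOp(String opName) {
>     for (SupportedOp op : SupportedOp.values()) {
>       if (op.name().equalsIgnoreCase(opName)) {
>         return op;
>       }
>     }
>     // Step 1: an unrecognized op means the operation is unsupported,
>     // not that the request was malformed.
>     throw new UnsupportedOperationException(opName + " is not supported");
>   }
>
>   static void validateArgs(SupportedOp op, String path) {
>     // Step 2: only a recognized op can have illegal arguments.
>     if (path == null || !path.startsWith("/")) {
>       throw new IllegalArgumentException("Invalid path for " + op + ": " + path);
>     }
>   }
> }
> {code}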


