[ https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713255#comment-16713255 ]
Hadoop QA commented on HDFS-14129:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 43s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 58s{color} | {color:orange} root: The patch generated 11 new + 80 unchanged - 0 fixed = 91 total (was 80) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 52s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 10s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
| | org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider.getServices() may expose internal representation by returning RouterPolicyProvider.rbfServices. At RouterPolicyProvider.java:[line 36] |
\\
\\
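For context, the FindBugs item above is the standard "exposes internal representation" pattern: {{getServices()}} hands callers the internal {{rbfServices}} array, which they could then mutate. A minimal, purely illustrative sketch of the usual remedy is shown below; the class and field names are taken from the warning, everything else is an assumption and not the attached patch.

{code:java}
import org.apache.hadoop.security.authorize.PolicyProvider;
import org.apache.hadoop.security.authorize.Service;

// Illustrative sketch only; the real class is defined in the HDFS-14129 patch.
public class RouterPolicyProvider extends PolicyProvider {

  // Field name taken from the FindBugs message; contents here are hypothetical.
  private static final Service[] rbfServices = new Service[] {};

  @Override
  public Service[] getServices() {
    // The flagged form would be "return rbfServices;", which exposes the array.
    // Returning a defensive copy keeps the internal state private.
    return rbfServices.clone();
  }
}
{code}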
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14129 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12950970/HDFS-14129-HDFS-13891.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5fa16f75e814 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 9c15947 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25730/artifact/out/diff-checkstyle-root.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/25730/artifact/out/whitespace-eol.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/25730/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25730/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 10000) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25730/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> RBF: Create new policy provider for router
> ------------------------------------------
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: namenode
> Affects Versions: HDFS-13532
> Reporter: Surendra Singh Lilhore
> Assignee: Ranith Sardar
> Priority: Major
> Attachments: HDFS-14129-HDFS-13891.001.patch
>
>
> Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to this
> class for the Router; it is better to create a new policy provider for the
> Router (a rough sketch follows the code block below).
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
>   this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
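> A rough sketch of the direction (illustrative only, not the attached patch; the
> ACL key and the service list are assumptions): define a Router-specific provider
> that registers {{RouterAdminProtocol}}, then pass it to {{refreshServiceAcl}}
> instead of {{HDFSPolicyProvider}}.
> {code:java}
> import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
> import org.apache.hadoop.security.authorize.PolicyProvider;
> import org.apache.hadoop.security.authorize.Service;
>
> /** Policy provider for Router-specific protocols (sketch). */
> public class RouterPolicyProvider extends PolicyProvider {
>
>   private static final Service[] services = new Service[] {
>       // Hypothetical ACL key; registers the protocol from the error below.
>       new Service("security.router.admin.protocol.acl",
>           RouterAdminProtocol.class)
>   };
>
>   @Override
>   public Service[] getServices() {
>     // Return a copy rather than the field itself, so callers cannot
>     // modify the provider's internal array.
>     return services.clone();
>   }
> }
>
> // The router admin server would then refresh its ACLs with the new provider:
> //   if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> //     this.adminServer.refreshServiceAcl(conf, new RouterPolicyProvider());
> //   }
> {code}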
> I got this issue when I was verifying HDFS-14079 on a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
> Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}