[GitHub] [hadoop] swagle commented on issue #785: HDDS-1464. Client should have different retry policies for different exceptions.

2019-05-03 Thread GitBox
swagle commented on issue #785: HDDS-1464. Client should have different retry 
policies for different exceptions.
URL: https://github.com/apache/hadoop/pull/785#issuecomment-489279581
 
 
   Thanks for the review, @hanishakoneru. On my local machine, I get an 80% pass 
rate on this test.
   I can see that it is failing elsewhere without these changes as well: 
https://github.com/apache/hadoop/pull/781
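
   For context, Hadoop's retry framework already supports the kind of 
per-exception mapping this PR is about; below is a minimal sketch using 
org.apache.hadoop.io.retry (the specific policies and exception types are 
illustrative, not the ones chosen in this patch):

   ```java
   import java.io.IOException;
   import java.util.HashMap;
   import java.util.Map;
   import java.util.concurrent.TimeUnit;

   import org.apache.hadoop.io.retry.RetryPolicies;
   import org.apache.hadoop.io.retry.RetryPolicy;

   public class PerExceptionRetryDemo {
     public static RetryPolicy buildPolicy() {
       Map<Class<? extends Exception>, RetryPolicy> policyMap = new HashMap<>();
       // Retry transient I/O failures with a fixed backoff...
       policyMap.put(IOException.class,
           RetryPolicies.retryUpToMaximumCountWithFixedSleep(
               5, 1, TimeUnit.SECONDS));
       // ...but fail fast on programming errors.
       policyMap.put(IllegalArgumentException.class,
           RetryPolicies.TRY_ONCE_THEN_FAIL);
       // Anything not listed falls through to the default policy.
       return RetryPolicies.retryByException(
           RetryPolicies.TRY_ONCE_THEN_FAIL, policyMap);
     }
   }
   ```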





[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832891#comment-16832891
 ] 

Eric Yang commented on HADOOP-16214:


[~daryn] {quote}If I were to use RBAC to protect a cluster, I'd want to handle 
both service and user accounts. I would need to write rules to allow only the 
users within certain roles; all else are rejected.  Hence the MIT best-effort 
"else allow all non-matching principals through" behavior would be a complete 
non-starter.{quote}

A role inside a Kerberos principal only conveys the identity of the caller; the 
server side must still perform the authorization grant for the system to remain 
secure.  Please do not conflate authentication with authorization.  Your 
proposal of using auth_to_local as a firewall rule tries to block anonymous 
users from gaining access to the system during the authentication phase, 
whereas the MIT rule mechanism defers authorization to either the proxy ACL or 
the Ranger plugin, because a principal that matches no auth_to_local rule is 
still a Kerberos-authenticated client.  This may sound like hair splitting, but 
please allow other community members a chance to develop more fine-grained 
authorization schemes than auth_to_local firewall rules.
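
For readers following the thread, a small demonstration of that distinction 
using the real KerberosName API; the rule string and principals below are 
invented for illustration, not taken from this issue:

{code:java}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class AuthToLocalDemo {
  public static void main(String[] args) throws Exception {
    // One narrow rule and no DEFAULT: only hdfs/<host>@EXAMPLE.COM maps.
    KerberosName.setRules("RULE:[2:$1@$0](hdfs@EXAMPLE.COM)s/.*/hdfs/");

    // Prints the short name "hdfs".
    System.out.println(
        new KerberosName("hdfs/nn1.example.com@EXAMPLE.COM").getShortName());

    try {
      new KerberosName("alice/admin@EXAMPLE.COM").getShortName();
    } catch (Exception e) {
      // Hadoop-style translation rejects the unmapped principal here, even
      // though the client already holds a valid Kerberos ticket; MIT-style
      // processing would pass it through for a later authorization check.
      System.out.println("no rule matched: " + e.getMessage());
    }
  }
}
{code}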

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).






[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832875#comment-16832875
 ] 

Hadoop QA commented on HADOOP-16091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m  
9s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
19s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} docker in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 55s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} ozone-recon in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 31s{color} | {color:black} {color} |

[jira] [Commented] (HADOOP-16144) Create a Hadoop RPC based KMS client

2019-05-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832838#comment-16832838
 ] 

Anu Engineer commented on HADOOP-16144:
---

Great, way to go. I wrote the genesis stuff to get the benchmarks; it is not 
important at all. Please post the patch when you are ready. I am traveling for 
the next few weeks, so my responses might be slow.

> Create a Hadoop RPC based KMS client
> 
>
> Key: HADOOP-16144
> URL: https://issues.apache.org/jira/browse/HADOOP-16144
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HADOOP-16144.001.patch, KMS.RPC.patch
>
>
> Create a new KMS client implementation that speaks Hadoop RPC.






[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-489240896
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1164 | trunk passed |
   | +1 | compile | 1230 | trunk passed |
   | +1 | checkstyle | 140 | trunk passed |
   | +1 | mvnsite | 127 | trunk passed |
   | +1 | shadedclient | 961 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 78 | trunk passed |
   | 0 | spotbugs | 56 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 168 | trunk passed |
   | -0 | patch | 91 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 73 | the patch passed |
   | +1 | compile | 958 | the patch passed |
   | +1 | javac | 958 | the patch passed |
   | -0 | checkstyle | 145 | root: The patch generated 48 new + 55 unchanged - 
0 fixed = 103 total (was 55) |
   | +1 | mvnsite | 127 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 38 | hadoop-tools_hadoop-aws generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 521 | hadoop-common in the patch passed. |
   | +1 | unit | 271 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7215 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 9371b4f89817 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f194540 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/testReport/ |
   | Max. process+thread count | 1380 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/20/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832813#comment-16832813
 ] 

Eric Yang commented on HADOOP-16091:


One notable difference is the location of the Ozone binary in the docker 
container.  In patch 001 and the older image, it is set to /opt/hadoop, whereas 
patch 002 sets it to /opt/ozone-${project.version}; this lets us skip unpacking 
the tarball in the docker project's target directory and expand it only inside 
the docker image.  I think the ideal location is actually 
/opt/apache/ozone-${project.version}, to be consistent with other Apache 
projects.

Another notable problem is that the hadoop-runner image is built with squashfs, 
which supports neither symlinks nor moving a directory during the build 
process.  It is probably better to pick CentOS as the base image to avoid those 
squashfs limitations.  Thoughts?

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.






[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832810#comment-16832810
 ] 

Hadoop QA commented on HADOOP-16091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m 
47s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
8s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} docker in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 13s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} docker in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} ozone-recon in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 13s{color} | {color:black} {color} |

[jira] [Updated] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16091:
---
Attachment: HADOOP-16091.002.patch

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.






[jira] [Commented] (HADOOP-16144) Create a Hadoop RPC based KMS client

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832808#comment-16832808
 ] 

Daryn Sharp commented on HADOOP-16144:
--

I've _almost_ got a variant of your great patch (just the rpc, not the genesis 
stuff) ready for internal load testing.  It has been a pain making the rpc 
client compliant with the key provider factory such that both are transparently 
supported.  Just wanted to let you know so we don't diverge in our efforts.
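
For readers following along: KeyProvider implementations are discovered through 
org.apache.hadoop.crypto.key.KeyProviderFactory, so being compliant with the 
key provider factory means plugging in roughly as sketched below. The class 
name and URI scheme here are hypothetical, not from either patch.

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

// Hypothetical factory for an RPC-backed KMS provider. A real implementation
// would also be listed in
// META-INF/services/org.apache.hadoop.crypto.key.KeyProviderFactory
// so that the ServiceLoader-based lookup can find it.
public class RpcKmsKeyProviderFactory extends KeyProviderFactory {
  @Override
  public KeyProvider createProvider(URI uri, Configuration conf)
      throws IOException {
    if (!"kmsrpc".equals(uri.getScheme())) {
      return null; // not ours; let the other registered factories try
    }
    // The actual RPC-based KeyProvider would be constructed here.
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}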

> Create a Hadoop RPC based KMS client
> 
>
> Key: HADOOP-16144
> URL: https://issues.apache.org/jira/browse/HADOOP-16144
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HADOOP-16144.001.patch, KMS.RPC.patch
>
>
> Create a new KMS client implementation that speaks Hadoop RPC.






[GitHub] [hadoop] hadoop-yetus commented on issue #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #788: HDDS-1475 : Fix OzoneContainer start 
method.
URL: https://github.com/apache/hadoop/pull/788#issuecomment-489230400
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 399 | trunk passed |
   | +1 | compile | 188 | trunk passed |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 813 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 231 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 407 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 386 | the patch passed |
   | +1 | compile | 193 | the patch passed |
   | +1 | javac | 193 | the patch passed |
   | +1 | checkstyle | 47 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 598 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 115 | the patch passed |
   | +1 | findbugs | 439 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 129 | hadoop-hdds in the patch failed. |
   | -1 | unit | 771 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4950 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/788 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3b7cbdbb6297 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f194540 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/4/testReport/ |
   | Max. process+thread count | 4611 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HADOOP-16277) Expose getTokenKind method in FileSystem

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832804#comment-16832804
 ] 

Daryn Sharp commented on HADOOP-16277:
--

I'm opposed to another api for what should be a private implementation detail – 
thanks [~jojochuang] for pointing that out.  Anytime a customer comes to me 
with custom code trying to do something with tokens I tell them to rip it all 
out.  Hadoop's security is supposed to be generally invisible.  If you feel the 
need to do explicit token manipulation then it's likely you are attacking a 
problem incorrectly.

Steve is referring to this api:
{code:java}
FileSystem#addDelegationTokens(String renewer, Credentials creds);{code}
I specifically added this years ago to support multi-token filesystems.  At the 
time it was viewfs, now it's hdfs/webhdfs+kms, internal s3+RBAC, etc.

It will only fetch tokens that it doesn't already have, so pass either an empty 
Credentials object or your current UGI's credentials.
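
A minimal usage sketch of that api (the renewer name below is a placeholder):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class TokenFetchDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Start from the current UGI's credentials so tokens already held
    // (hdfs, kms, ...) are not fetched a second time.
    Credentials creds = UserGroupInformation.getCurrentUser().getCredentials();
    Token<?>[] fetched = fs.addDelegationTokens("yarn", creds);
    System.out.println("fetched " + fetched.length + " new token(s)");
  }
}
{code}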

> Expose getTokenKind method in FileSystem
> 
>
> Key: HADOOP-16277
> URL: https://issues.apache.org/jira/browse/HADOOP-16277
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Venkatesh Sridharan
>Priority: Trivial
>
> It would be nice to have a getTokenKind() method exposed in 
> [FileSystem|https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java].
>  Currently WebHdfsFileSystem class has getTokenKind() which is protected. 
> Having it in FileSystem makes it easier to use at runtime when the consumer 
> doesn't know what the underlying FileSystem implementation is. 






[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832800#comment-16832800
 ] 

Eric Yang commented on HADOOP-16091:


Patch 002 includes the following changes:
# Make sure the Ozone project dependencies include the correct versions of the 
submodules
# Create a docker submodule in the hadoop-ozone project for building the docker 
image
# Use the maven assembly plugin to create the Ozone tarball so the downstream 
docker project can pick it up as a dependent artifact

{code}
[INFO] Apache Hadoop Ozone Docker Distribution  SUCCESS [ 32.967 s]
{code}

The Ozone docker build takes about 33 seconds on a virtual machine running on a 
2015 Mac laptop, and it is only activated with the -Pdist flag.

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.






[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-05-03 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832798#comment-16832798
 ] 

Shweta commented on HADOOP-13656:
-

Will upload a patch with checkstyle, whitespace, and unit test fixes in 
TestHDFSTrash.

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, 
> HADOOP-13656.003.patch, HADOOP-13656.004.patch, HADOOP-13656.005.patch
>
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have to 
> change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.






[jira] [Commented] (HADOOP-16284) KMS Cache Miss Storm

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832789#comment-16832789
 ] 

Daryn Sharp commented on HADOOP-16284:
--

Do you know why the number of keys is relevant?  Is the key cache evicting them 
due to size, or are the accesses for a particular key more distributed over 
time vs. a few highly contended keys?

I'm a bit puzzled how 4 poorly performing KMS instances can overwhelm their 
backend. :)   In all seriousness, other than reducing the overhead of the sync 
itself, you could avoid service disruption by moving to an async background 
fetch, throwing a RetriableException for and during a cache miss/fill.  Much 
like Rushabh and I did for file creation in the NN.  I think that went into the 
community...
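
A rough sketch of that async-fill pattern (class and method names are 
illustrative; only org.apache.hadoop.ipc.RetriableException is the real class):

{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.ipc.RetriableException;

public class AsyncFillKeyCache {
  private final ConcurrentMap<String, byte[]> cache = new ConcurrentHashMap<>();
  private final ConcurrentMap<String, CompletableFuture<Void>> fills =
      new ConcurrentHashMap<>();

  public byte[] getKeyMaterial(String keyName) throws IOException {
    byte[] material = cache.get(keyName);
    if (material != null) {
      return material;
    }
    // Start at most one background fetch per key, then make the caller
    // retry instead of blocking a handler thread on the slow backend.
    fills.computeIfAbsent(keyName, name ->
        CompletableFuture.runAsync(() -> cache.put(name, fetchFromBackend(name)))
            .whenComplete((done, err) -> fills.remove(name)));
    throw new RetriableException("Cache fill in progress for key " + keyName);
  }

  private byte[] fetchFromBackend(String keyName) {
    return new byte[0]; // stand-in for the real backend round trip
  }
}
{code}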

 

 

> KMS Cache Miss Storm
> 
>
> Key: HADOOP-16284
> URL: https://issues.apache.org/jira/browse/HADOOP-16284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
> Environment: CDH 5.13.1, Kerberized, Cloudera Keytrustee Server
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We recently stumbled upon a performance issue with KMS, where occasionally it 
> exhibited a "No content to map" error (this cluster ran an old version that 
> doesn't have HADOOP-14841) and jobs crashed. *We bumped the number of KMSes 
> from 2 to 4, and the situation got even worse.*
> Later, we realized this cluster had a few hundred encryption zones and a few 
> hundred encryption keys. This is pretty unusual because most of the 
> deployments known to us have at most a dozen keys. So in terms of number of 
> keys, this cluster is 1-2 orders of magnitude higher than anyone else.
> The high number of encryption keys increases the likelihood of key cache 
> misses in KMS. In Cloudera's setup, each cache miss forces KMS to sync with 
> its backend, the Cloudera Keytrustee Server. Plus, the high number of KMSes 
> amplifies the latency, effectively causing a [cache miss 
> storm|https://en.wikipedia.org/wiki/Cache_stampede].
> We were able to reproduce this issue with KMS-o-meter (HDFS-14312) - I will 
> come up with a better name later surely - and discovered a scalability bug in 
> CKTS. The fix was verified again with the tool.
> Filing this bug so the community is aware of this issue. I don't have a 
> solution for now in KMS. But we want to address this scalability problem in 
> the near future because we are seeing use cases that require thousands of 
> encryption keys.
> 
> On a side note, 4 KMSes don't work well without HADOOP-14445 (and subsequent 
> fixes). A MapReduce job acquires at most 3 KMS delegation tokens, and so in 
> some cases, such as distcp, it would fail to reach the 4th KMS on the remote 
> cluster. I imagine similar issues exist for other execution engines, but I 
> didn't test.






[GitHub] [hadoop] hanishakoneru commented on issue #785: HDDS-1464. Client should have different retry policies for different exceptions.

2019-05-03 Thread GitBox
hanishakoneru commented on issue #785: HDDS-1464. Client should have different 
retry policies for different exceptions.
URL: https://github.com/apache/hadoop/pull/785#issuecomment-489213679
 
 
   The patch LGTM overall. 
   The CI unit test failure 
TestOzoneClientRetriesOnException#testMaxRetriesByOzoneClient looks related 
though. 





[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832759#comment-16832759
 ] 

Hadoop QA commented on HADOOP-16266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 23m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 44s{color} | {color:orange} root: The patch generated 6 new + 351 unchanged 
- 7 fixed = 357 total (was 358) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}230m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16266 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967791/HADOOP-16266.011.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 05b94178aeab 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 
10 13:20:24 UTC 2018 

[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832757#comment-16832757
 ] 

Daryn Sharp commented on HADOOP-16214:
--

Role-encoded principals are a creative use of principals that defies 
conventional logic (neither FreeIPA nor AD supports a 2nd component that is not 
a host), so we are in uncharted territory.
 # We can't force a requirement to enable the insecure "MIT" mode to support 
multi-component principals.
 # Allowing only 2-component principals to be SPNs is too restrictive and 
solely based on what a user (truly no offense, Issac!) who wants to use 
non-standard principals says would work for them.  Instead, we can use a 
sentinel value, ie. something like "user/-/role", to indicate no host.

Now why does this matter? We are increasingly moving to role-based access 
control, so I can envision using this feature to tightly restrict access to 
highly confidential clusters to a special subset of users within a realm. If I 
were to use RBAC to protect a cluster, I'd want to handle both service and user 
accounts. I would need to write rules to allow only the users within certain 
roles; all else are rejected.  Hence the MIT best-effort "else allow all 
non-matching principals through" behavior would be a complete non-starter.
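
To make the last point concrete, here is a hedged sketch of such a rule set 
(the role names and realm are invented): only principals whose second component 
is an approved role get mapped, and with no DEFAULT rule every other principal 
is rejected at translation time.

{code:java}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class RoleOnlyRulesDemo {
  public static void main(String[] args) throws Exception {
    // Format each 2-component principal as "user;role@REALM", accept only
    // the approved roles, then strip everything after the user name.
    KerberosName.setRules(
        "RULE:[2:$1;$2@$0](.*;etl@EXAMPLE.COM)s/;.*//\n"
      + "RULE:[2:$1;$2@$0](.*;audit@EXAMPLE.COM)s/;.*//");

    // Prints "alice".
    System.out.println(
        new KerberosName("alice/etl@EXAMPLE.COM").getShortName());

    // Throws: "admin" is not an approved role and there is no DEFAULT rule.
    new KerberosName("bob/admin@EXAMPLE.COM").getShortName();
  }
}
{code}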

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).






[GitHub] [hadoop] avijayanhwx commented on a change in pull request #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-03 Thread GitBox
avijayanhwx commented on a change in pull request #788: HDDS-1475 : Fix 
OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r280894595
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
 ##
 @@ -69,6 +69,7 @@
   private UUID id;
   private Server server;
   private final ContainerDispatcher storageContainer;
+  private volatile boolean isStarted;
 
 Review comment:
   I thought this would be multi-threaded. Given that it is single-threaded, 
volatile is not needed. I will remove it.
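
   For reference, a minimal sketch of the start-once guard under discussion 
(the class is illustrative, not the actual XceiverServerGrpc code):

   ```java
   // With start() only ever called from a single thread, a plain boolean
   // guard suffices; volatile would only matter if the flag were read and
   // written from different threads.
   class StartOnceServer {
     private boolean isStarted;

     void start() {
       if (isStarted) {
         return; // repeated start() calls become a no-op
       }
       // ... bind and start the gRPC server here ...
       isStarted = true;
     }
   }
   ```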





[jira] [Updated] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16091:
---
Attachment: HADOOP-16091.001.patch

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Priority: Major
> Attachments: HADOOP-16091.001.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> Point a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permission can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline 
> build process / external build process) are compatible with the apache 
> release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-03 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16091:
---
Assignee: Eric Yang
  Status: Patch Available  (was: Open)

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> under the Apache Organization. Browsing the Apache GitHub mirror, there are 
> only 7 projects using a separate repository for the docker image build. 
> Popular projects' official images are not from the Apache organization, such 
> as zookeeper, tomcat, httpd. We may not disrupt what other Apache projects 
> are doing, but it looks like an inline build process is widely employed by 
> the majority of projects such as Nifi, Brooklyn, thrift, karaf, syncope and 
> others. The situation seems a bit chaotic for Apache as a whole. However, the 
> Hadoop community can decide what is best for Hadoop. My preference is to 
> remove ozone from the source tree naming, if Ozone is intended to be a 
> subproject of Hadoop for a long period of time. This enables the Hadoop 
> community to host docker images for various subprojects without having to 
> check out several source trees to trigger a grand build. However, the inline 
> build process seems more popular than the separated process. Hence, I highly 
> recommend making the docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) it couldn't support critical updates:
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating images for older releases (We would like to provide images
>  for hadoop 2.6/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> Point a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permission can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline 
> build process / external build process) are compatible with the apache 
> release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 merged pull request #786: HDDS-1448 : RatisPipelineProvider should only consider open pipeline …

2019-05-03 Thread GitBox
nandakumar131 merged pull request #786: HDDS-1448 : RatisPipelineProvider 
should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16282) Avoid FileStream to improve performance

2019-05-03 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832717#comment-16832717
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16282:
---

Thanks [~daryn] for your feedback.
I think we should fix the potential leaks (they are even present in the 
original code) instead of reverting the patch.

e.g.
{code:java}
InputStream in = null;
OutputStream out = null;
try {
  in = Files.newInputStream(src.toPath());
  out = dstFS.create(dst);
  IOUtils.copyBytes(in, out, conf);
} catch (IOException e) {
  IOUtils.closeStream(out);
  IOUtils.closeStream(in);
  throw e;
}
{code}
The variables in and out can leak resources/handles. I am well aware of this 
problem since I hit it in production.

We should add a finally clause:
{code:java}
try {
  // ... open the streams and copy as above ...
} catch (IOException e) {
  throw e;
} finally {
  // null-safe close of both streams
  IOUtils.closeStream(out);
  IOUtils.closeStream(in);
}
{code}
or use a try-with-resources clause:
{code:java}
try (InputStream in = Files.newInputStream(src.toPath());
     OutputStream out = dstFS.create(dst)) {
  IOUtils.copyBytes(in, out, conf);
}
{code}
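
For reference, a self-contained sketch of the try-with-resources variant 
(assuming a local {{File}} source and a Hadoop {{FileSystem}} destination; the 
class and method names are illustrative, not the exact FileUtil code):
{code:java}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CopyExample {
  // Both streams are closed automatically, on success and on failure,
  // so no file descriptor can leak even if create() or the copy throws.
  static void copyLocalToFs(File src, FileSystem dstFS, Path dst,
      Configuration conf) throws IOException {
    try (InputStream in = Files.newInputStream(src.toPath());
         OutputStream out = dstFS.create(dst)) {
      IOUtils.copyBytes(in, out, conf);
    }
  }
}
{code}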

> Avoid FileStream to improve performance
> ---
>
> Key: HADOOP-16282
> URL: https://issues.apache.org/jira/browse/HADOOP-16282
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16282-01.patch, HADOOP-16282-02.patch
>
>
> The FileInputStream and FileOutputStream classes contain a finalizer method 
> which will cause garbage collection pauses. See 
> [JDK-8080225|https://bugs.openjdk.java.net/browse/JDK-8080225] for details.
> The FileReader and FileWriter constructors instantiate FileInputStream and 
> FileOutputStream, again causing garbage collection issues when finalizer 
> methods are called.
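
(For illustration, the kind of replacement the description implies for the 
reader case; the file name and charset are made-up details, not the actual 
patch:)
{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NoFinalizerRead {
  public static void main(String[] args) throws IOException {
    // Instead of new BufferedReader(new FileReader("data.txt")), which
    // creates a finalizer-bearing FileInputStream under the hood, the
    // NIO factory returns a stream with no finalize() method:
    try (BufferedReader reader = Files.newBufferedReader(
        Paths.get("data.txt"), StandardCharsets.UTF_8)) {
      System.out.println(reader.readLine());
    }
  }
}
{code}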



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16288) Using an authenticated proxy server to access cloud storage from a cluster

2019-05-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832704#comment-16832704
 ] 

Steve Loughran commented on HADOOP-16288:
-

S3A settings are via the fs.s3a.proxy options in the docs there.

ADLS and ABFS just use the JDK classes/settings, so they probably need this 
(see the sketch after the list below).

* The best place for a comparator will be hadoop-common, in the package 
org.apache.hadoop.security.http
* If you can think of a unit test for this, it'd be great
* otherwise, you will need to (a) provide docs for this in either the adls/abfs 
files (and link from the other)
* and (b) run the entire ADLS test suite through such a proxy, declaring which 
ADLS store you ran against.
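
For illustration, the JDK-level hook in question is a {{java.net.Authenticator}}; 
a minimal sketch (the proxy host, port, and credentials are made-up values, 
not the eventual ADLS/ABFS wiring):
{code:java}
import java.net.Authenticator;
import java.net.PasswordAuthentication;

public class ProxyAuthExample {
  public static void main(String[] args) {
    // Route HTTPS traffic through the authenticating proxy.
    System.setProperty("https.proxyHost", "proxy.example.com");
    System.setProperty("https.proxyPort", "3128");

    // Plain system properties cannot carry the username/password;
    // a JVM-wide Authenticator answers the proxy's 407 challenge.
    Authenticator.setDefault(new Authenticator() {
      @Override
      protected PasswordAuthentication getPasswordAuthentication() {
        if (getRequestorType() == RequestorType.PROXY) {
          return new PasswordAuthentication("user", "secret".toCharArray());
        }
        return null;
      }
    });
    // ... any HttpURLConnection-based client now authenticates ...
  }
}
{code}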




> Using an authenticated proxy server to access cloud storage from a cluster
> --
>
> Key: HADOOP-16288
> URL: https://issues.apache.org/jira/browse/HADOOP-16288
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/azure
>Affects Versions: 3.2.0
>Reporter: Istvan Fajth
>Priority: Minor
>
> Given an environment which does not have direct access to the internet, but 
> has to route all requests through a Proxy that requires authentication, 
> Hadoop filesystem commands can not go through.
> My understanding of an [Official Java 
> blog|https://blogs.oracle.com/wssfc/handling-proxy-server-authentication-requests-in-java]
>  is that this requires a special Authenticator class to be used, which I do 
> not see anywhere in the code, nor could I find any relevant parameter to set 
> the proxy authentication credentials.
> I tried to specify the proxy in the form of 
> username:[passw...@host.fqdn|mailto:passw...@host.fqdn] and the port via java 
> system properties, but that did not work either.
> My use case is to connect to ADLS, but I have not seen relevant properties 
> for S3 either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16288) Using an authenticated proxy server to access ADLS and ABFS storage

2019-05-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16288:

Summary: Using an authenticated proxy server to access ADLS and ABFS 
storage  (was: Using an authenticated proxy server to access cloud storage from 
a cluster)

> Using an authenticated proxy server to access ADLS and ABFS storage
> ---
>
> Key: HADOOP-16288
> URL: https://issues.apache.org/jira/browse/HADOOP-16288
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/azure
>Affects Versions: 3.2.0
>Reporter: Istvan Fajth
>Priority: Minor
>
> Given an environment which does not have direct access to the internet, but 
> has to route all requests through a Proxy that requires authentication, 
> Hadoop filesystem commands can not go through.
> My understanding of an [Official Java 
> blog|https://blogs.oracle.com/wssfc/handling-proxy-server-authentication-requests-in-java]
>  is that this requires a special Authenticator class to be used, which I do 
> not see anywhere in the code, nor could I find any relevant parameter to set 
> the proxy authentication credentials.
> I tried to specify the proxy in the form of 
> username:[passw...@host.fqdn|mailto:passw...@host.fqdn] and the port via java 
> system properties, but that did not work either.
> My use case is to connect to ADLS, but I have not seen relevant properties 
> for S3 either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832706#comment-16832706
 ] 

Hadoop QA commented on HADOOP-16214:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16214 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967797/HADOOP-16214.013.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bccaeb302d9a 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f1875b2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16220/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16220/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> 

[jira] [Updated] (HADOOP-16288) Using an authenticated proxy server to access cloud storage from a cluster

2019-05-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16288:

Component/s: (was: contrib/cloud)
 fs/azure
 fs/adl

> Using an authenticated proxy server to access cloud storage from a cluster
> --
>
> Key: HADOOP-16288
> URL: https://issues.apache.org/jira/browse/HADOOP-16288
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/azure
>Reporter: Istvan Fajth
>Priority: Major
>
> Given an environment which does not have direct access to the internet, but 
> has to route all requests through a Proxy that requires authentication, 
> Hadoop filesystem commands can not go through.
> My understanding of an [Official Java 
> blog|https://blogs.oracle.com/wssfc/handling-proxy-server-authentication-requests-in-java]
>  is that this requires a special Authenticator class to be used, which I do 
> not see anywhere in the code, nor could I find any relevant parameter to set 
> the proxy authentication credentials.
> I tried to specify the proxy in the form of 
> username:[passw...@host.fqdn|mailto:passw...@host.fqdn] and the port via java 
> system properties, but that did not work either.
> My use case is to connect to ADLS, but I have not seen relevant properties 
> for S3 either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16288) Using an authenticated proxy server to access cloud storage from a cluster

2019-05-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16288:

Priority: Minor  (was: Major)

> Using an authenticated proxy server to access cloud storage from a cluster
> --
>
> Key: HADOOP-16288
> URL: https://issues.apache.org/jira/browse/HADOOP-16288
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/azure
>Reporter: Istvan Fajth
>Priority: Minor
>
> Given an environment which does not have direct access to the internet, but 
> has to route all requests through a Proxy that requires authentication, 
> Hadoop filesystem commands can not go through.
> My understanding of an [Official Java 
> blog|https://blogs.oracle.com/wssfc/handling-proxy-server-authentication-requests-in-java]
>  is that this requires a special Authenticator class to be used, which I do 
> not see anywhere in the code, nor could I find any relevant parameter to set 
> the proxy authentication credentials.
> I tried to specify the proxy in the form of 
> username:[passw...@host.fqdn|mailto:passw...@host.fqdn] and the port via java 
> system properties, but that did not work either.
> My use case is to connect to ADLS, but I have not seen relevant properties 
> for S3 either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16288) Using an authenticated proxy server to access cloud storage from a cluster

2019-05-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16288:

Affects Version/s: 3.2.0

> Using an authenticated proxy server to access cloud storage from a cluster
> --
>
> Key: HADOOP-16288
> URL: https://issues.apache.org/jira/browse/HADOOP-16288
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/azure
>Affects Versions: 3.2.0
>Reporter: Istvan Fajth
>Priority: Minor
>
> Given an environment which does not have direct access to the internet, but 
> has to route all requests through a Proxy that requires authentication, 
> Hadoop filesystem commands can not go through.
> My understanding of an [Official Java 
> blog|https://blogs.oracle.com/wssfc/handling-proxy-server-authentication-requests-in-java]
>  is that this requires a special Authenticator class to be used, which I do 
> not see anywhere in the code, nor could I find any relevant parameter to set 
> the proxy authentication credentials.
> I tried to specify the proxy in the form of 
> username:[passw...@host.fqdn|mailto:passw...@host.fqdn] and the port via java 
> system properties, but that did not work either.
> My use case is to connect to ADLS, but I have not seen relevant properties 
> for S3 either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16282) Avoid FileStream to improve performance

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832694#comment-16832694
 ] 

Daryn Sharp commented on HADOOP-16282:
--

Please consider reverting most or all of this patch.

I know finalized objects are "bad", but this seemingly simple change is high 
risk according to the linked java bug:
{quote}The unresolvable compatibility issue is the requirement in 
FileInputStream and FileOutputStream finalizer methods to call close. ... Since 
it is unknown/unknowable how many FIS/FOS subclasses might rely on overriding 
close or finalize +_*the compatibility issue is severe*_+.  +Only a long term 
(multiple release) restriction to deprecate or invalidate overriding would have 
possibility of eventually eliminating the compatibility problem.+ 
{quote}

As best I can tell, if you leak a NIO stream, you forever leak the file 
descriptor.  The exceptions are completely different, so anyone expecting to 
catch {{FileNotFoundException}}, i.e. at least from {{FileUtil}}, is going to 
be surprised that it's now {{NoSuchFileException}}.
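
A small illustration of that difference (the path is made up):
{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.nio.file.Files;

public class MissingFileExceptions {
  public static void main(String[] args) {
    File missing = new File("/no/such/file");
    try {
      new FileInputStream(missing).close();
    } catch (Exception e) {
      // prints java.io.FileNotFoundException
      System.out.println("old API: " + e.getClass().getName());
    }
    try {
      Files.newInputStream(missing.toPath()).close();
    } catch (Exception e) {
      // prints java.nio.file.NoSuchFileException
      System.out.println("NIO API: " + e.getClass().getName());
    }
  }
}
{code}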

And please do not span components if at all possible.  There is an hdfs project 
for hdfs bugs.

> Avoid FileStream to improve performance
> ---
>
> Key: HADOOP-16282
> URL: https://issues.apache.org/jira/browse/HADOOP-16282
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16282-01.patch, HADOOP-16282-02.patch
>
>
> The FileInputStream and FileOutputStream classes contain a finalizer method 
> which will cause garbage collection pauses. See 
> [JDK-8080225|https://bugs.openjdk.java.net/browse/JDK-8080225] for details.
> The FileReader and FileWriter constructors instantiate FileInputStream and 
> FileOutputStream, again causing garbage collection issues when finalizer 
> methods are called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Summary: AuthenticationFilterInitializer doc has speudo instead of pseudo  
(was: AuthenticationFilterInitializer doc has speudo instead of psuedo)

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of psuedo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Description: 
AuthenticationFilterInitializer doc has speudo instead of pseudo.

{code}
 * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
{code}

  was:
AuthenticationFilterInitializer doc has speudo instead of psuedo.

{code}
 * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
{code}


> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of psuedo

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Summary: AuthenticationFilterInitializer doc has speudo instead of psuedo  
(was: AuthenticationFilterInitializer doc has speudo instead of psueudo)

> AuthenticationFilterInitializer doc has speudo instead of psuedo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of psuedo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of psueudo

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Description: 
AuthenticationFilterInitializer doc has speudo instead of psuedo.

{code}
 * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
{code}

  was:
AuthenticationFilterInitializer doc has speudo instead of psueudo.

{code}
 * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
{code}


> AuthenticationFilterInitializer doc has speudo instead of psueudo
> -
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of psuedo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16292) Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner

2019-05-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832682#comment-16832682
 ] 

Hudson commented on HADOOP-16292:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16500 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16500/])
HADOOP-16292. Refactor checkTrustAndSend in SaslDataTransferClient to (cliang: 
rev 1d59cc490cb46e99d1d72fe3bd0c2a396d98f2c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java


> Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner  
> -
>
> Key: HADOOP-16292
> URL: https://issues.apache.org/jira/browse/HADOOP-16292
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16292.001.patch
>
>
> Right now
> private IOStreamPair checkTrustAndSend(InetAddress addr,
>  OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId)
> only provides an API without the secretKey parameter for 
> checkTrustAndSend(
>  InetAddress addr, OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId,
>  SecretKey secretKey)
> Remove the former API to make it cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of psueudo

2019-05-03 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HADOOP-16293:
--

 Summary: AuthenticationFilterInitializer doc has speudo instead of 
psueudo
 Key: HADOOP-16293
 URL: https://issues.apache.org/jira/browse/HADOOP-16293
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth
Affects Versions: 3.2.0
Reporter: Prabhu Joseph


AuthenticationFilterInitializer doc has speudo instead of psueudo.

{code}
 * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of psueudo

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16293:
---
Component/s: documentation

> AuthenticationFilterInitializer doc has speudo instead of psueudo
> -
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
>
> AuthenticationFilterInitializer doc has speudo instead of psueudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-03 Thread GitBox
bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix 
OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r280857665
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -111,6 +111,7 @@ private static long nextCallId() {
   private final ReplicationLevel replicationLevel;
   private long nodeFailureTimeoutMs;
   private final long cacheEntryExpiryInteval;
+  private volatile boolean isStarted = false;
 
 Review comment:
   same here too


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16292) Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner

2019-05-03 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-16292:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner  
> -
>
> Key: HADOOP-16292
> URL: https://issues.apache.org/jira/browse/HADOOP-16292
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16292.001.patch
>
>
> Right now
> private IOStreamPair checkTrustAndSend(InetAddress addr,
>  OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId)
> only provides an API without the secretKey parameter for 
> checkTrustAndSend(
>  InetAddress addr, OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId,
>  SecretKey secretKey)
> Remove the former API to make it cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16292) Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner

2019-05-03 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832679#comment-16832679
 ] 

Chen Liang commented on HADOOP-16292:
-

This is just minor refactoring, so I think it is okay not to have a new unit 
test. I have committed the v001 patch to trunk. Thanks for the contribution 
[~zhengxg3]!

> Refactor checkTrustAndSend in SaslDataTransferClient to make it cleaner  
> -
>
> Key: HADOOP-16292
> URL: https://issues.apache.org/jira/browse/HADOOP-16292
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HADOOP-16292.001.patch
>
>
> Right now
> private IOStreamPair checkTrustAndSend(InetAddress addr,
>  OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId)
> only provides an API without the secretKey parameter for 
> checkTrustAndSend(
>  InetAddress addr, OutputStream underlyingOut, InputStream underlyingIn,
>  DataEncryptionKeyFactory encryptionKeyFactory,
>  Token accessToken, DatanodeID datanodeId,
>  SecretKey secretKey)
> Remove the former API to make it cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix OzoneContainer start method.

2019-05-03 Thread GitBox
bharatviswa504 commented on a change in pull request #788: HDDS-1475 : Fix 
OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r280855056
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
 ##
 @@ -69,6 +69,7 @@
   private UUID id;
   private Server server;
   private final ContainerDispatcher storageContainer;
+  private volatile boolean isStarted;
 
 Review comment:
   Sorry, missed this. Do we need volatile here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16251) ABFS: add FSMainOperationsBaseTest

2019-05-03 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832661#comment-16832661
 ] 

Da Zhou commented on HADOOP-16251:
--

[~daryn] I am a little confused. So the docs are correct, and
{code:java}
public void testListStatusThrowsExceptionForUnreadableDir(){code}
in "FSMainOperationsBaseTest" should not use listStatus for the permission 
check, as it is stated as N/A in the doc?

> ABFS: add FSMainOperationsBaseTest
> --
>
> Key: HADOOP-16251
> URL: https://issues.apache.org/jira/browse/HADOOP-16251
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Just happened to see 
> "hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java",
>  ABFS could inherit this test to increase its test coverage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-489162821
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 398 | trunk passed |
   | +1 | compile | 204 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 130 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   | -0 | patch | 270 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 403 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 666 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 133 | the patch passed |
   | +1 | findbugs | 427 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1311 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5772 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux f1b119980d60 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1875b2 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/10/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/10/testReport/ |
   | Max. process+thread count | 4646 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone hadoop-ozone/client hadoop-ozone/integration-test U: . |
   | Console output | 

[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832648#comment-16832648
 ] 

Eric Yang commented on HADOOP-16214:


Patch 13 fixed a null pointer exception when the rule mechanism is not defined, 
defaulting to the Hadoop rule mechanism.

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).
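
(For illustration only, a pattern that accepts any number of '/'-separated 
components; this is a sketch, not the KerberosName regex or the patch:)
{code:java}
import java.util.regex.Pattern;

public class PrincipalCheck {
  // principal = component ('/' component)* ('@' realm)?
  private static final Pattern PRINCIPAL =
      Pattern.compile("[^/@]+(/[^/@]+)*(@[^/@]+)?");

  public static void main(String[] args) {
    String[] samples = {
        "alice@EXAMPLE.COM",                // one component
        "nn/host.example.com@EXAMPLE.COM",  // two components
        "user/role/extra@EXAMPLE.COM"       // three components
    };
    for (String p : samples) {
      System.out.println(p + " -> " + PRINCIPAL.matcher(p).matches());
    }
  }
}
{code}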



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajfabbri commented on a change in pull request #787: HADOOP-16251. ABFS: add FSMainOperationsBaseTest

2019-05-03 Thread GitBox
ajfabbri commented on a change in pull request #787: HADOOP-16251. ABFS: add 
FSMainOperationsBaseTest
URL: https://github.com/apache/hadoop/pull/787#discussion_r280846350
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
 ##
 @@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Ignore;
+
+import org.apache.hadoop.fs.FSMainOperationsBaseTest;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.azurebfs.contract.ABFSContractTestBinding;
+
+/**
+ * Test AzureBlobFileSystem main operations.
+ * */
+public class ITestAzureBlobFileSystemMainOperation extends 
FSMainOperationsBaseTest {
+
+  private static final String TEST_ROOT_DIR =
+  "/tmp/TestAzureBlobFileSystemMainOperations";
+
+  private final ABFSContractTestBinding binding;
+
+  public ITestAzureBlobFileSystemMainOperation () throws Exception {
+super(TEST_ROOT_DIR);
+binding = new ABFSContractTestBinding();
+  }
+
+  @Override
+  public void setUp() throws Exception {
+binding.setup();
+fSys = binding.getFileSystem();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+super.tearDown();
+  }
+
+  @Override
+  protected FileSystem createFileSystem() throws Exception {
+return fSys;
+  }
+
+  @Override
+  @Ignore("There shouldn't be permission check for getFileInfo")
+  public void testListStatusThrowsExceptionForUnreadableDir() {
+// Permission Checks:
+// 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
+  }
+
+  @Override
+  @Ignore("There shouldn't be permission check for getFileInfo")
+  public void testGlobStatusThrowsExceptionForUnreadableDir() {
+// Permission Checks:
+// 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
 
 Review comment:
   I was wrong. I misread the namenode code. Though it handles an 
AccessControlException, apparently it doesn't check permissions, just the HA 
status.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-03 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16214:
---
Attachment: HADOOP-16214.013.patch

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16291) HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()

2019-05-03 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832644#comment-16832644
 ] 

Aaron Fabbri commented on HADOOP-16291:
---

Thanks [~daryn]. I thought it was strange that the docs were wrong for this 
long. I was going to ask for a sanity check on this JIRA, but you beat me to it.

> HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()
> 
>
> Key: HADOOP-16291
> URL: https://issues.apache.org/jira/browse/HADOOP-16291
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Aaron Fabbri
>Priority: Minor
>  Labels: newbie
>
> Fix some errors in the HDFS Permissions doc.
> Noticed this when reviewing HADOOP-16251. The FS Permissions 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html]
>  seems to mark a lot of permissions as Not Applicable (N/A) when that is not 
> the case. In particular getFileInfo (getFileStatus) checks READ permission 
> bit 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3202-L3204],
>  as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16291) HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()

2019-05-03 Thread Daryn Sharp (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp resolved HADOOP-16291.
--
Resolution: Not A Problem

> HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()
> 
>
> Key: HADOOP-16291
> URL: https://issues.apache.org/jira/browse/HADOOP-16291
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Aaron Fabbri
>Priority: Minor
>  Labels: newbie
>
> Fix some errors in the HDFS Permissions doc.
> Noticed this when reviewing HADOOP-16251. The FS Permissions 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html]
>  seems to mark a lot of permissions as Not Applicable (N/A) when that is not 
> the case. In particular getFileInfo (getFileStatus) checks READ permission 
> bit 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3202-L3204],
>  as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16251) ABFS: add FSMainOperationsBaseTest

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832634#comment-16832634
 ] 

Daryn Sharp commented on HADOOP-16251:
--

The docs aren't wrong.   That's the HA state check, not a permissions check.

> ABFS: add FSMainOperationsBaseTest
> --
>
> Key: HADOOP-16251
> URL: https://issues.apache.org/jira/browse/HADOOP-16251
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Just happened to see 
> "hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java",
>  ABFS could inherit this test to increase its test coverage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16291) HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832635#comment-16832635
 ] 

Daryn Sharp commented on HADOOP-16291:
--

Docs are correct.  That's the HA state check, not a permissions check.

> HDFS Permissions Guide appears incorrect about getFileStatus()/getFileInfo()
> 
>
> Key: HADOOP-16291
> URL: https://issues.apache.org/jira/browse/HADOOP-16291
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Aaron Fabbri
>Priority: Minor
>  Labels: newbie
>
> Fix some errors in the HDFS Permissions doc.
> Noticed this when reviewing HADOOP-16251. The FS Permissions 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html]
>  seems to mark a lot of permissions as Not Applicable (N/A) when that is not 
> the case. In particular getFileInfo (getFileStatus) checks READ permission 
> bit 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3202-L3204],
>  as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16281) ABFS: Rename operation, GetFileStatus before rename operation and throw exception on the driver side

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832628#comment-16832628
 ] 

Daryn Sharp commented on HADOOP-16281:
--

This Jira is very deceptively named.  One would not expect an Azure-dubbed 
change to change the NN and its filesystem impl and S3.  Please don't change 
hdfs on a seemingly benign-sounding Jira in common.  Break it out.

I superficially skimmed the Jira and it's not clear if _any_ of the rename 
semantics have been incompatibly changed?  Like the renaming into/over 
directories, since a lot of docs were updated?  Exceptions?

> ABFS: Rename operation, GetFileStatus before rename operation and  throw 
> exception on the driver side
> -
>
> Key: HADOOP-16281
> URL: https://issues.apache.org/jira/browse/HADOOP-16281
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> ABFS should add the rename with options:
>  [https://github.com/apache/hadoop/pull/743]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832604#comment-16832604
 ] 

Eric Yang commented on HADOOP-16287:


It would be best to ensure that the server's response to the client is 
identical with or without the proxy.

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Need Trusted Proxy Support by reading the doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832601#comment-16832601
 ] 

Eric Yang commented on HADOOP-16287:


[~daryn] {quote}-1 on returning a new auth cookie as the impersonated user. 
It's insanely dangerous and will create bugs and/or security holes. The auth 
cookie must be the authenticated user. Let's explore the unintended side 
effects.{quote}

If the auth cookie is forwarded from the proxy to the end user, then the end 
user gets a token for the proxy user (the authenticated user).  That does not 
sound right.  Either no auth cookie should be returned, or the cookie should 
be a token for the end user's credential.  This prevents accidental leaks of 
impersonation power.  The three points listed are implementation mistakes that 
can happen if the proxyuser or server code is not written properly.  Knox does 
shield the hadoop.auth cookie from leaking.  The handling of the hadoop.auth 
cookie between Hadoop and Knox should be a private conversation.  If an 
operation between servers spans multiple calls, the cached token can reduce 
hitting the KDC for each call.

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Need Trusted Proxy Support by reading the doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16238) Add the possbility to set SO_REUSEADDR in IPC Server Listener

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832597#comment-16832597
 ] 

Daryn Sharp commented on HADOOP-16238:
--

I could have sworn the server already enabled SO_REUSEADDR...  I'd prefer the 
default to be true; how many people really want to wait for the socket state 
to clear up?
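
For illustration, a minimal sketch of the listener-side change, assuming the 
{{ipc.server.reuseaddr}} key proposed in the description (the binding details 
are simplified and not the actual patch):
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import org.apache.hadoop.conf.Configuration;

public class ReuseAddrSketch {
  static ServerSocketChannel bind(Configuration conf, InetSocketAddress addr)
      throws IOException {
    ServerSocketChannel channel = ServerSocketChannel.open();
    // SO_REUSEADDR lets the listener rebind while old connections linger
    // in TIME_WAIT, avoiding "Address already in use" on fast restarts.
    channel.socket().setReuseAddress(
        conf.getBoolean("ipc.server.reuseaddr", true));
    channel.socket().bind(addr, 128);
    return channel;
  }
}{code}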

> Add the possbility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances, this would be desirable, see explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> Rarely it also causes problems in a test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-05-03 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832595#comment-16832595
 ] 

Ayush Saxena commented on HADOOP-16059:
---

Hi [~daryn]

 {{new FastSaslClientFactory(null)}} doesn't throw any checked exception; 
does this still bother you?

> Use SASL Factories Cache to Improve Performance
> ---
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: After-Dn.png, After-Read.png, After-Server.png, 
> After-write.png, Before-DN.png, Before-Read.png, Before-Server.png, 
> Before-Write.png, HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch, HADOOP-16059-03.patch, HADOOP-16059-04.patch
>
>
> SASL client factories can be cached, and SASL server and client factories 
> can be extended together at SaslParticipant to improve performance.
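
For illustration, a minimal sketch of the caching idea, assuming only the 
standard {{javax.security.sasl}} API (this is not the committed 
{{FastSaslClientFactory}}):
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslClientFactory;
import javax.security.sasl.SaslException;

public class CachingSaslClientFactorySketch {
  // Enumerate the registered factories once and index them by mechanism,
  // so each negotiation avoids a fresh security-provider scan.
  private final Map<String, List<SaslClientFactory>> byMech = new HashMap<>();

  public CachingSaslClientFactorySketch(Map<String, ?> props) {
    Enumeration<SaslClientFactory> factories = Sasl.getSaslClientFactories();
    while (factories.hasMoreElements()) {
      SaslClientFactory factory = factories.nextElement();
      for (String mech : factory.getMechanismNames(props)) {
        byMech.computeIfAbsent(mech, k -> new ArrayList<>()).add(factory);
      }
    }
  }

  public SaslClient create(String mech, String authzId, String protocol,
      String server, Map<String, ?> props) throws SaslException {
    for (SaslClientFactory factory
        : byMech.getOrDefault(mech, Collections.emptyList())) {
      SaslClient client = factory.createSaslClient(new String[] {mech},
          authzId, protocol, server, props, null);
      if (client != null) {
        return client;
      }
    }
    return null;
  }
}{code}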



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832581#comment-16832581
 ] 

Daryn Sharp commented on HADOOP-16266:
--

Glad to see such momentum!  I'll try to review today or early next week.

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch, HADOOP-16266.004.patch, HADOOP-16266.005.patch, 
> HADOOP-16266.006.patch, HADOOP-16266.007.patch, HADOOP-16266.008.patch, 
> HADOOP-16266.009.patch, HADOOP-16266.010.patch, HADOOP-16266.011.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832580#comment-16832580
 ] 

Daryn Sharp commented on HADOOP-16287:
--

We need this, but -1 on one seemingly simple change noted below this list. 
Cursory review:
 # I'd like to see this be a unique filter, not a subclass of 
{{KerberosAuthenticationHandler}}, since there's no reason for it to be 
specific to any given authentication type. Ideally it should be an enforced 
filter installed at the tail of the filter chain.
 # Should be much cheaper to use {{request.getParameter("doAs")}} versus 
manually re-parsing the query string (a rough sketch follows this list).
 # Returning no meaningful status message with the forbidden response isn't 
useful.  You might not have to catch the exception from 
{{ProxyUsers.authorize}}; I think the exception mappings will convert it to a 
forbidden response and also afford apps the ability to encode the remote 
exception in the response payload. Test it out though.
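
To make #2 and #3 concrete, a rough sketch of such a standalone filter, 
assuming the servlet API and the existing {{ProxyUsers}} / 
{{UserGroupInformation}} APIs (the wiring is illustrative, not the posted 
patch):
{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class DoAsFilterSketch implements Filter {
  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    // Cheap: let the container parse the query string.
    final String doAs = httpReq.getParameter("doAs");
    if (doAs != null && httpReq.getRemoteUser() != null) {
      UserGroupInformation proxy = UserGroupInformation.createProxyUser(doAs,
          UserGroupInformation.createRemoteUser(httpReq.getRemoteUser()));
      try {
        // Assumes ProxyUsers.refreshSuperUserGroupsConfiguration(conf)
        // ran at startup so the proxyuser rules are loaded.
        ProxyUsers.authorize(proxy, req.getRemoteAddr());
      } catch (AuthorizationException e) {
        // Forbidden response with a meaningful message.
        ((HttpServletResponse) resp).sendError(
            HttpServletResponse.SC_FORBIDDEN, e.getMessage());
        return;
      }
      // Expose the impersonated user downstream; the auth cookie itself
      // must stay bound to the authenticated user (see the -1 below).
      req = new HttpServletRequestWrapper(httpReq) {
        @Override public String getRemoteUser() { return doAs; }
      };
    }
    chain.doFilter(req, resp);
  }
}{code}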

-1 on returning a new auth cookie as the impersonated user. It's insanely 
dangerous and will create bugs and/or security holes. The auth cookie must be 
the authenticated user. Let's explore the unintended side effects.

The auth cookie is equivalent to hard authentication (typically kerberos unless 
there's a custom auth filter).  That's very important because impersonation and 
management operations (ie. token ops) can only be performed via hard 
authentication or auth cookie.  The impersonated user did not authenticate so 
they _must not_ be granted an auth cookie.

Cookie aware clients including but not limited to {{AuthenticatedURL}} and by 
extension the rest-based kms client, and perhaps things like Hue, may be 
unexpectedly impacted. Consider if "proxyuser" impersonates "user1". This patch 
will cause a cookie for user1 to be used for all subsequent operations. Now a 
few bad things can happen:
 # "proxyuser" attempts another operation as user1. Instead of passing an auth 
cookie for proxyuser, it passes one for user1 and fails because user1 is likely 
not a proxy user and thus cannot proxy to itself.
 # "proxyuser" now attempts to impersonate user2.  Uses the auth cookie for 
user1 which hopefully fails because user1 likely isn't a proxy user too.
 # "proxyuser" attempts to perform an operation as itself but instead will do 
it as user1.

Best case, all these should fail. Worst case, the potential for very creative 
abuse can elevate privileges esp. with sloppy proxy user configurations which 
apparently are more common than I thought.

 

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Need Trusted Proxy Support by reading the doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-05-03 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832569#comment-16832569
 ] 

Erik Krogen commented on HADOOP-16266:
--

Thanks [~csun], these were very helpful comments. I have addressed all of them 
in v011.

{quote}
Question on RpcCall#isOpen - this is used to determine whether a call has been 
processed or not, but is implemented using connection.channel.isOpen(), which 
is a little confusing to me. Isn't a channel used to serve multiple RPC 
calls, and only closed when you run into some errors or a timeout? How can 
it be used to indicate whether a call has been processed or not?
{quote}
I changed the name of this variable to be more illustrative, and added a 
comment. Essentially this is checking if the connection was dropped (e.g. due 
to timeout) while the call was in the queue. If so, the connection was closed, 
and we did not end up doing any processing.
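
Roughly, the check amounts to this (the accessor names here are illustrative, 
not the exact patch):
{code:java}
// A call whose connection died while it sat in the queue was never
// processed, so it should not feed the processing-time metrics.
boolean droppedWhileQueued = !call.getConnection().getChannel().isOpen();
{code}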

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch, HADOOP-16266.004.patch, HADOOP-16266.005.patch, 
> HADOOP-16266.006.patch, HADOOP-16266.007.patch, HADOOP-16266.008.patch, 
> HADOOP-16266.009.patch, HADOOP-16266.010.patch, HADOOP-16266.011.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-05-03 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16266:
-
Attachment: HADOOP-16266.011.patch

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch, HADOOP-16266.004.patch, HADOOP-16266.005.patch, 
> HADOOP-16266.006.patch, HADOOP-16266.007.patch, HADOOP-16266.008.patch, 
> HADOOP-16266.009.patch, HADOOP-16266.010.patch, HADOOP-16266.011.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-03 Thread GitBox
hanishakoneru commented on a change in pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r280814733
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
 ##
 @@ -380,18 +380,19 @@ public static String 
getDefaultRatisDirectory(Configuration conf) {
* @return the path of datanode id as string
*/
   public static String getDatanodeIdFilePath(Configuration conf) {
-String dataNodeIDPath = conf.get(ScmConfigKeys.OZONE_SCM_DATANODE_ID);
-if (dataNodeIDPath == null) {
+String dataNodeIDDirPath = conf.get(ScmConfigKeys.OZONE_SCM_DATANODE_ID);
+if (dataNodeIDDirPath == null) {
   File metaDirPath = ServerUtils.getOzoneMetaDirPath(conf);
   if (metaDirPath == null) {
 // this means meta data is not found, in theory should not happen at
 // this point because should've failed earlier.
 throw new IllegalArgumentException("Unable to locate meta data" +
 "directory when getting datanode id path");
   }
-  dataNodeIDPath = new File(metaDirPath,
-  ScmConfigKeys.OZONE_SCM_DATANODE_ID_PATH_DEFAULT).toString();
+  dataNodeIDDirPath = metaDirPath.toString();
 }
-return dataNodeIDPath;
+// Use default datanode id file name for file path
+return new File(dataNodeIDDirPath,
+ScmConfigKeys.OZONE_SCM_DATANODE_ID_PATH_DEFAULT).toString();
 
 Review comment:
   OZONE_SCM_DATANODE_ID_PATH_DEFAULT is now a constant as it is not 
configurable. We should move it to OzoneConsts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-03 Thread GitBox
hanishakoneru commented on a change in pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r280812006
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
 ##
 @@ -117,8 +116,7 @@ public void setUp() throws Exception {
 }
 conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
 new File(testRoot, "scm").getAbsolutePath());
-path = Paths.get(path.toString(),
-TestDatanodeStateMachine.class.getSimpleName() + ".id").toString();
+path = new File(testRoot, "datnodeID").getAbsolutePath();
 
 Review comment:
   NITPICK: datanodeID misspelled.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-03 Thread GitBox
hanishakoneru commented on a change in pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r280813768
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -296,7 +296,7 @@
 
   public static final int OZONE_SCM_DEFAULT_PORT =
   OZONE_SCM_DATANODE_PORT_DEFAULT;
-  // File Name and path where datanode ID is to written to.
+  // The path where datanode ID is to be written to.
   // if this value is not set then container startup will fail.
   public static final String OZONE_SCM_DATANODE_ID = "ozone.scm.datanode.id";
 
 Review comment:
   Can we rename this config to OZONE_SCM_DATANODE_ID_DIR or PATH to be more 
clear.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16238) Add the possbility to set SO_REUSEADDR in IPC Server Listener

2019-05-03 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832538#comment-16832538
 ] 

Wei-Chiu Chuang commented on HADOOP-16238:
--

+1, makes sense to me.
I'm pretty sure I've seen this on a real cluster, and it's pretty annoying.

> Add the possbility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances, this would be desirable, see explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> Rarely it also causes problems in a test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16281) ABFS: Rename operation, GetFileStatus before rename operation and throw exception on the driver side

2019-05-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832524#comment-16832524
 ] 

Sean Mackrory commented on HADOOP-16281:


[~DanielZhou] I actually hadn't thought of it being applicable to Azure 
because the WASB connector already had similar mechanisms, and ADLS Gen1 and 
Gen2 already offer the required file-system semantics. But then I remembered 
you can turn off Hierarchical Namespace :) If people want to run HBase without 
that feature for some reason, then yes, I'd love to make sure it supports ABFS 
well.

> ABFS: Rename operation, GetFileStatus before rename operation and  throw 
> exception on the driver side
> -
>
> Key: HADOOP-16281
> URL: https://issues.apache.org/jira/browse/HADOOP-16281
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> ABFS should add the rename with options:
>  [https://github.com/apache/hadoop/pull/743]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832509#comment-16832509
 ] 

Daryn Sharp commented on HADOOP-16059:
--

A bit late, but static blocks that throw exceptions can cause very bizarre 
and misleading errors.  Is there any particular reason why this:
{code:java}
+ private static SaslClientFactory saslFactory;
+ static {
+saslFactory = new FastSaslClientFactory(null);
+  }{code}
isn't this:
{code:java}
+ private static final SaslClientFactory saslFactory = new 
FastSaslClientFactory(null);{code}
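
For context on why initializer failures get bizarre: either form fails the 
same way at runtime, but the symptoms land far from the root cause. The first 
use throws ExceptionInInitializerError, and every later use throws 
NoClassDefFoundError with no trace of the original failure. A self-contained 
illustration (not Hadoop code):
{code:java}
public class StaticInitDemo {
  static class Holder {
    // Any throwable escaping static initialization poisons the class.
    static final int VALUE = Integer.parseInt("not a number");
  }

  public static void main(String[] args) {
    try {
      System.out.println(Holder.VALUE);
    } catch (ExceptionInInitializerError first) {
      // Wraps the real NumberFormatException.
      System.out.println("first use: " + first.getCause());
    }
    try {
      System.out.println(Holder.VALUE);
    } catch (NoClassDefFoundError later) {
      // The original cause is gone; only "Could not initialize class".
      System.out.println("later use: " + later);
    }
  }
}{code}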

> Use SASL Factories Cache to Improve Performance
> ---
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: After-Dn.png, After-Read.png, After-Server.png, 
> After-write.png, Before-DN.png, Before-Read.png, Before-Server.png, 
> Before-Write.png, HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch, HADOOP-16059-03.patch, HADOOP-16059-04.patch
>
>
> SASL client factories can be cached, and SASL server and client factories 
> can be extended together at SaslParticipant to improve performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832497#comment-16832497
 ] 

Daryn Sharp commented on HADOOP-14951:
--

Nice.  A few comments after a cursory review:
 # Why change {{checkAccess}} to take a key name parameter when there's an 
existing {{checkKeyAccess}}?
 # Elaborating an earlier comment, I'd prefer {{KMSACLs}} to be the interface 
or an abstract class to minimize changes throughout the code.  In particular, 
it's much easier to review security-related patches when the reviewer doesn't 
have to scrutinize all the changes to existing tests to ensure something 
wasn't subtly altered.
 # Minor, but please change {{Assert.assertTrue("Expected KeyManagementACLs 
type", KMSWebApp.getACLs().getClass() == TestKeyManagementACLs.class);}} to 
{{Assert.assertEquals}} or {{Assert.assertSame}}.  Failed true/false asserts 
take longer to debug when the difference isn't shown, and debugging typically 
ends up requiring the requested change anyway.
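
For instance (illustrative):
{code:java}
// assertTrue only reports "expected true"; assertSame reports both the
// expected and the actual class in the failure message.
Assert.assertSame(TestKeyManagementACLs.class,
    KMSWebApp.getACLs().getClass());{code}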

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-10.patch, HADOOP-14951-11.patch, 
> HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize KMS's key management if the 
> KMSACLs behaviour is not enough. An external key management solution would 
> need a higher-level API where it can decide whether a given operation is 
> allowed.
>  To achieve this, one solution is to introduce a new interface that could be 
> implemented by KMSACLs (and also by other key managers), along with a new 
> configuration point where the actual interface implementation could be 
> specified.
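
As a sketch of what a plugged-in implementation could look like under the 
proposed interface, following the patch's abstract {{hasAccess}} shape 
({{PolicyClient}} is a hypothetical stand-in for an external key-management 
client, and the other abstract methods are omitted):
{code:java}
import org.apache.hadoop.security.UserGroupInformation;

public class ExternalKMSACLs extends KMSACLs {
  /** Hypothetical client for whatever external policy service is in use. */
  public interface PolicyClient {
    boolean isAllowed(String user, String op, String key);
  }

  private final PolicyClient policy;

  public ExternalKMSACLs(PolicyClient policy) {
    this.policy = policy;
  }

  @Override
  public boolean hasAccess(Type keyOperationType, UserGroupInformation ugi,
      String key) {
    // Delegate allow/deny to the external system instead of the built-in
    // ACL and blacklist maps.
    return policy.isAllowed(ugi.getShortUserName(),
        keyOperationType.name(), key);
  }
}{code}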



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832478#comment-16832478
 ] 

Hadoop QA commented on HADOOP-16287:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967770/HADOOP-16287-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6d2f3a6ff82a 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f1875b2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16218/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16218/testReport/ |
| Max. process+thread count | 1555 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832461#comment-16832461
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 20s{color} | {color:orange} root: The patch generated 5 new + 94 unchanged - 
18 fixed = 99 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-14951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967758/HADOOP-14951-11.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7df20453605d 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 

[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832451#comment-16832451
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 11s{color} | {color:orange} root: The patch generated 11 new + 95 unchanged 
- 18 fixed = 106 total (was 113) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}212m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.server.namenode.TestLeaseManager |
|   | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs 
implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#issuecomment-489072576
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1063 | trunk passed |
   | +1 | compile | 1086 | trunk passed |
   | +1 | checkstyle | 132 | trunk passed |
   | +1 | mvnsite | 108 | trunk passed |
   | +1 | shadedclient | 915 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 78 | trunk passed |
   | 0 | spotbugs | 171 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 208 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 75 | the patch passed |
   | +1 | compile | 1034 | the patch passed |
   | +1 | javac | 1034 | the patch passed |
   | -0 | checkstyle | 131 | root: The patch generated 11 new + 95 unchanged - 
18 fixed = 106 total (was 113) |
   | +1 | mvnsite | 104 | the patch passed |
   | -1 | whitespace | 0 | The patch has 8 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 637 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 79 | the patch passed |
   | +1 | findbugs | 225 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 211 | hadoop-kms in the patch passed. |
   | -1 | unit | 6431 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 67 | The patch does not generate ASF License warnings. |
   | | | 12751 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.namenode.TestLeaseManager |
   |   | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.namenode.TestINodeAttributeProvider |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.TestDatanodeReport |
   |   | hadoop.hdfs.server.namenode.TestListOpenFiles |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   |   | hadoop.hdfs.TestErasureCodingMultipleRacks |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/664 |
   | JIRA Issue | HADOOP-14951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 028c48b95dca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1875b2 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/2/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/2/artifact/out/whitespace-eol.txt
 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745647
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
-  public static final String ACL_DEFAULT = 
AccessControlList.WILDCARD_ACL_VALUE;
-
-  public static final int RELOADER_SLEEP_MILLIS = 1000;
-
-  private volatile Map<Type, AccessControlList> acls;
-  private volatile Map<Type, AccessControlList> blacklistedAcls;
-  @VisibleForTesting
-  volatile Map<String, HashMap<KeyOpType, AccessControlList>> keyAcls;
-  @VisibleForTesting
-  volatile Map<KeyOpType, AccessControlList> defaultKeyAcls = new HashMap<>();
-  @VisibleForTesting
-  volatile Map<KeyOpType, AccessControlList> whitelistKeyAcls = new 
HashMap<>();
-  private ScheduledExecutorService executorService;
-  private long lastReload;
-
-  KMSACLs(Configuration conf) {
-if (conf == null) {
-  conf = loadACLs();
-}
-setKMSACLs(conf);
-setKeyACLs(conf);
-  }
-
-  public KMSACLs() {
-this(null);
-  }
-
-  private void setKMSACLs(Configuration conf) {
-Map<Type, AccessControlList> tempAcls = new HashMap<Type, AccessControlList>();
-Map<Type, AccessControlList> tempBlacklist = new HashMap<Type, AccessControlList>();
-for (Type aclType : Type.values()) {
-  String aclStr = conf.get(aclType.getAclConfigKey(), ACL_DEFAULT);
-  tempAcls.put(aclType, new AccessControlList(aclStr));
-  String blacklistStr = conf.get(aclType.getBlacklistConfigKey());
-  if (blacklistStr != null) {
-// Only add if blacklist is present
-tempBlacklist.put(aclType, new AccessControlList(blacklistStr));
-LOG.info("'{}' Blacklist '{}'", aclType, blacklistStr);
-  }
-  LOG.info("'{}' ACL '{}'", aclType, aclStr);
-}
-acls = tempAcls;
-blacklistedAcls = tempBlacklist;
-  }
-
-  @VisibleForTesting
-  void setKeyACLs(Configuration conf) {
-Map<String, HashMap<KeyOpType, AccessControlList>> tempKeyAcls =
-new HashMap<String, HashMap<KeyOpType, AccessControlList>>();
-Map<String, String> allKeyACLS =
-conf.getValByRegex(KMSConfiguration.KEY_ACL_PREFIX_REGEX);
-for (Map.Entry<String, String> keyAcl : allKeyACLS.entrySet()) {
-  String k = keyAcl.getKey();
-  // this should be of type "key.acl.<keyname>.<OP>"
-  int keyNameStarts = KMSConfiguration.KEY_ACL_PREFIX.length();
-  int keyNameEnds = k.lastIndexOf(".");
-  if (keyNameStarts >= keyNameEnds) {
-LOG.warn("Invalid key name '{}'", k);
-  } else {
-String aclStr = keyAcl.getValue();
-String keyName = k.substring(keyNameStarts, keyNameEnds);
-String keyOp = k.substring(keyNameEnds + 1);
-KeyOpType aclType = null;
-try {
-  aclType = KeyOpType.valueOf(keyOp);
-} catch (IllegalArgumentException e) {
-  LOG.warn("Invalid key Operation '{}'", keyOp);
-}
-if (aclType != null) {
-  // On the assumption this will be single threaded.. else we need to
-  // ConcurrentHashMap
-  HashMap<KeyOpType, AccessControlList> aclMap =
-  tempKeyAcls.get(keyName);
-  if (aclMap == null) {
-aclMap = new HashMap<KeyOpType, AccessControlList>();
-tempKeyAcls.put(keyName, aclMap);
-  }
-  aclMap.put(aclType, new AccessControlList(aclStr));
-  LOG.info("KEY_NAME '{}' KEY_OP '{}' ACL '{}'",
-  keyName, aclType, aclStr);
-}
-  }
-}
-keyAcls = tempKeyAcls;
-
-final Map<KeyOpType, AccessControlList> tempDefaults = new HashMap<>();
-final Map<KeyOpType, AccessControlList> tempWhitelists = new HashMap<>();
-for (KeyOpType keyOp : KeyOpType.values()) {
-  parseAclsWithPrefix(conf, KMSConfiguration.DEFAULT_KEY_ACL_PREFIX,
-  keyOp, tempDefaults);
-  parseAclsWithPrefix(conf, KMSConfiguration.WHITELIST_KEY_ACL_PREFIX,
-  keyOp, tempWhitelists);
-}
-defaultKeyAcls = tempDefaults;
-whitelistKeyAcls = tempWhitelists;
-  }
+  /**
+   * First Check if user is in ACL for the KMS operation, if yes, then return
+   * true if user is not present in any configured blacklist for the operation.
+   * 
+   * @param keyOperationType
+   *  KMS Operation
+   * @param ugi
+   *  UserGroupInformation of user
+   * @param key
+   *  the key name
+   * @return true if user has access
+   */
+  public abstract boolean hasAccess(Type keyOperationType,
+  UserGroupInformation ugi, String key);
 
+  
   /**
-   * Parse the acls from configuration with the specified prefix. Currently
-   * only 2 possible prefixes: whitelist and default.
-   *
-   * @param conf The configuration.
-   * @param prefix The prefix.
-   * @param keyOp The key operation.
-   * @param results The collection of results to add to.
+   * This is called by the KeyProvider to check if the given user is
+   * authorized to perform the specified operation on the given acl name.
+   * @param aclName name of the key ACL
+   * @param ugi User's UserGroupInformation
+   * @param opType Operation Type 
+   * @return true if user has access to the aclName and opType else false
*/
-  private void parseAclsWithPrefix(final Configuration conf,
-  

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745665
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
-  public static final String ACL_DEFAULT = AccessControlList.WILDCARD_ACL_VALUE;
-
-  public static final int RELOADER_SLEEP_MILLIS = 1000;
-
-  private volatile Map<Type, AccessControlList> acls;
-  private volatile Map<Type, AccessControlList> blacklistedAcls;
-  @VisibleForTesting
-  volatile Map<String, HashMap<KeyOpType, AccessControlList>> keyAcls;
-  @VisibleForTesting
-  volatile Map<KeyOpType, AccessControlList> defaultKeyAcls = new HashMap<>();
-  @VisibleForTesting
-  volatile Map<KeyOpType, AccessControlList> whitelistKeyAcls = new HashMap<>();
-  private ScheduledExecutorService executorService;
-  private long lastReload;
-
-  KMSACLs(Configuration conf) {
-    if (conf == null) {
-      conf = loadACLs();
-    }
-    setKMSACLs(conf);
-    setKeyACLs(conf);
-  }
-
-  public KMSACLs() {
-    this(null);
-  }
-
-  private void setKMSACLs(Configuration conf) {
-    Map<Type, AccessControlList> tempAcls = new HashMap<Type, AccessControlList>();
-    Map<Type, AccessControlList> tempBlacklist = new HashMap<Type, AccessControlList>();
-    for (Type aclType : Type.values()) {
-      String aclStr = conf.get(aclType.getAclConfigKey(), ACL_DEFAULT);
-      tempAcls.put(aclType, new AccessControlList(aclStr));
-      String blacklistStr = conf.get(aclType.getBlacklistConfigKey());
-      if (blacklistStr != null) {
-        // Only add if blacklist is present
-        tempBlacklist.put(aclType, new AccessControlList(blacklistStr));
-        LOG.info("'{}' Blacklist '{}'", aclType, blacklistStr);
-      }
-      LOG.info("'{}' ACL '{}'", aclType, aclStr);
-    }
-    acls = tempAcls;
-    blacklistedAcls = tempBlacklist;
-  }
-
-  @VisibleForTesting
-  void setKeyACLs(Configuration conf) {
-    Map<String, HashMap<KeyOpType, AccessControlList>> tempKeyAcls =
-        new HashMap<String, HashMap<KeyOpType, AccessControlList>>();
-    Map<String, String> allKeyACLS =
-        conf.getValByRegex(KMSConfiguration.KEY_ACL_PREFIX_REGEX);
-    for (Map.Entry<String, String> keyAcl : allKeyACLS.entrySet()) {
-      String k = keyAcl.getKey();
-      // this should be of type "key.acl.<KEY_NAME>.<OP>"
-      int keyNameStarts = KMSConfiguration.KEY_ACL_PREFIX.length();
-      int keyNameEnds = k.lastIndexOf(".");
-      if (keyNameStarts >= keyNameEnds) {
-        LOG.warn("Invalid key name '{}'", k);
-      } else {
-        String aclStr = keyAcl.getValue();
-        String keyName = k.substring(keyNameStarts, keyNameEnds);
-        String keyOp = k.substring(keyNameEnds + 1);
-        KeyOpType aclType = null;
-        try {
-          aclType = KeyOpType.valueOf(keyOp);
-        } catch (IllegalArgumentException e) {
-          LOG.warn("Invalid key Operation '{}'", keyOp);
-        }
-        if (aclType != null) {
-          // On the assumption this will be single threaded.. else we need to
-          // ConcurrentHashMap
-          HashMap<KeyOpType, AccessControlList> aclMap =
-              tempKeyAcls.get(keyName);
-          if (aclMap == null) {
-            aclMap = new HashMap<KeyOpType, AccessControlList>();
-            tempKeyAcls.put(keyName, aclMap);
-          }
-          aclMap.put(aclType, new AccessControlList(aclStr));
-          LOG.info("KEY_NAME '{}' KEY_OP '{}' ACL '{}'",
-              keyName, aclType, aclStr);
-        }
-      }
-    }
-    keyAcls = tempKeyAcls;
-
-    final Map<KeyOpType, AccessControlList> tempDefaults = new HashMap<>();
-    final Map<KeyOpType, AccessControlList> tempWhitelists = new HashMap<>();
-    for (KeyOpType keyOp : KeyOpType.values()) {
-      parseAclsWithPrefix(conf, KMSConfiguration.DEFAULT_KEY_ACL_PREFIX,
-          keyOp, tempDefaults);
-      parseAclsWithPrefix(conf, KMSConfiguration.WHITELIST_KEY_ACL_PREFIX,
-          keyOp, tempWhitelists);
-    }
-    defaultKeyAcls = tempDefaults;
-    whitelistKeyAcls = tempWhitelists;
-  }
+  /**
+   * First Check if user is in ACL for the KMS operation, if yes, then return
+   * true if user is not present in any configured blacklist for the operation.
+   * 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745671
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
 [… same KMSACLs.java diff hunk as quoted in full above …]
+   * @return true if user has access
+   */
+  public abstract boolean hasAccess(Type keyOperationType,
+      UserGroupInformation ugi, String key);
 
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745677
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
 [… same KMSACLs.java diff hunk as quoted in full above …]
+   * This is called by the KeyProvider to check if the given user is
+   * authorized to perform the specified operation on the given acl name.
+   * @param aclName name of the key ACL
+   * @param ugi User's UserGroupInformation
+   * @param opType Operation Type 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from 

[GitHub] [hadoop] hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs 
implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#issuecomment-48906
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1050 | trunk passed |
   | +1 | compile | 1077 | trunk passed |
   | +1 | checkstyle | 131 | trunk passed |
   | +1 | mvnsite | 101 | trunk passed |
   | +1 | shadedclient | 895 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 73 | trunk passed |
   | 0 | spotbugs | 165 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 200 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 1023 | the patch passed |
   | +1 | javac | 1023 | the patch passed |
   | -0 | checkstyle | 138 | root: The patch generated 5 new + 95 unchanged - 
18 fixed = 100 total (was 113) |
   | +1 | mvnsite | 101 | the patch passed |
   | -1 | whitespace | 0 | The patch has 6 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 606 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 79 | the patch passed |
   | +1 | findbugs | 211 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 221 | hadoop-kms in the patch passed. |
   | -1 | unit | 5606 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 11795 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStream |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.TestLeaseRecoveryStriped |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.TestRenameWhileOpen |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.TestClientReportBadBlock |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.fs.TestWebHdfsFileContextMainOperations |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
   |   | hadoop.hdfs.TestDFSAddressConfig |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.security.token.block.TestBlockToken |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/664 |
   | JIRA Issue | HADOOP-14951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b16dd794557f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1875b2 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/3/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 

[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832444#comment-16832444
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
45s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 5 new + 95 unchanged - 
18 fixed = 100 total (was 113) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestQuota |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|   | hadoop.hdfs.TestMaintenanceState |
|   | 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745681
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
 [… same KMSACLs.java diff hunk as quoted in full above …]
+   * @param opType Operation Type 
+   * @return true if user has access to the aclName and opType else false
    */
-  private void parseAclsWithPrefix(final Configuration conf,
-  

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-03 Thread GitBox
hadoop-yetus commented on a change in pull request #664: [HADOOP-14951] Make 
the KMSACLs implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#discussion_r280745641
 
 

 ##
 File path: 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 ##
 @@ -65,269 +66,105 @@ public String getBlacklistConfigKey() {
 }
   }
 
 [… same KMSACLs.java diff hunk as quoted in full above …]
+   * @param opType Operation Type 
+   * @return true if user has access to the aclName and opType else false
    */
-  private void parseAclsWithPrefix(final Configuration conf,
-  

[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-489068923
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 404 | trunk passed |
   | +1 | compile | 199 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 408 | trunk passed |
   | -0 | patch | 268 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 405 | the patch passed |
   | +1 | compile | 208 | the patch passed |
   | +1 | javac | 208 | the patch passed |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 125 | the patch passed |
   | +1 | findbugs | 437 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1046 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5348 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 714cc545bcd9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1875b2 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/9/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/9/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/9/testReport/ |
   | Max. process+thread count | 4130 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 

[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16287:
---
Status: Patch Available  (was: Open)

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, reading the doAs query parameter.
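For illustration, the handling could look roughly like the sketch below, where
resolveUser and its surroundings are assumptions, not the attached patch: after
Kerberos has authenticated the caller as knox, the doAs query parameter is
read, the end user is wrapped as a proxy user, and the existing ProxyUsers
machinery enforces the hadoop.proxyuser.* impersonation rules.

    import javax.servlet.http.HttpServletRequest;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.authorize.AuthorizationException;
    import org.apache.hadoop.security.authorize.ProxyUsers;

    // Hypothetical helper; not part of HADOOP-16287-001.patch.
    static UserGroupInformation resolveUser(HttpServletRequest request,
        String authenticatedPrincipal) throws AuthorizationException {
      String doAs = request.getParameter("doAs");
      UserGroupInformation realUser =
          UserGroupInformation.createRemoteUser(authenticatedPrincipal);
      if (doAs == null) {
        return realUser;          // no impersonation requested
      }
      UserGroupInformation proxyUgi =
          UserGroupInformation.createProxyUser(doAs, realUser);
      // Rejects the request unless hadoop.proxyuser.<realUser>.hosts/groups
      // allow this caller to impersonate the doAs user.
      ProxyUsers.authorize(proxyUgi, request.getRemoteAddr());
      return proxyUgi;
    }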



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-03 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16287:
---
Attachment: HADOOP-16287-001.patch

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, reading the doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Zsombor Gegesy (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832369#comment-16832369
 ] 

Zsombor Gegesy commented on HADOOP-14951:
-

Sure, good idea. The only thing that concerns me at the moment is that we have 
_boolean hasAccess(Type keyOperationType, UserGroupInformation ugi, String 
key)_ and _boolean hasAccessToKey(String keyName, UserGroupInformation ugi, 
KeyOpType opType)_, which look very similar; the only real difference is the 
operation type enum. Do we really need two separate types?
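To make the overlap concrete, the two checks could in principle collapse into a
single pluggable signature, along these lines (the interface and method names
below are hypothetical illustrations, not the patch's API):

    import org.apache.hadoop.security.UserGroupInformation;

    public interface KMSAccessCheck {
      // opName carries the name of either a KMSACLs.Type constant or a
      // KeyOpType constant, so one method can serve both call sites.
      boolean isAllowed(String opName, UserGroupInformation ugi,
          String keyName);
    }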

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-10.patch, HADOOP-14951-11.patch, 
> HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize the KMS's key management if the 
> KMSACLs behaviour is not sufficient. An external key management solution 
> would need a higher-level API where it can decide whether a given operation 
> is allowed or not.
>  To achieve this, one solution would be to introduce a new interface, which 
> could be implemented by KMSACLs - and also by other KMS implementations - 
> and to add a new configuration point where the actual interface 
> implementation could be specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-03 Thread Zsombor Gegesy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsombor Gegesy updated HADOOP-14951:

Attachment: HADOOP-14951-11.patch

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-10.patch, HADOOP-14951-11.patch, 
> HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize the KMS's key management if the 
> KMSACLs behaviour is not sufficient. An external key management solution 
> would need a higher-level API where it can decide whether a given operation 
> is allowed or not.
>  To achieve this, one solution would be to introduce a new interface, which 
> could be implemented by KMSACLs - and also by other KMS implementations - 
> and to add a new configuration point where the actual interface 
> implementation could be specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-05-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832280#comment-16832280
 ] 

Hudson commented on HADOOP-16059:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16499 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16499/])
HADOOP-16059. Use SASL Factories Cache to Improve Performance. (vinayakumarb: 
rev f1875b205e492ef071e7ef78b147efee0e51263d)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/FastSaslClientFactory.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/FastSaslServerFactory.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


> Use SASL Factories Cache to Improve Performance
> ---
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: After-Dn.png, After-Read.png, After-Server.png, 
> After-write.png, Before-DN.png, Before-Read.png, Before-Server.png, 
> Before-Write.png, HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch, HADOOP-16059-03.patch, HADOOP-16059-04.patch
>
>
> SASL client factories can be cached, and the SASL server and client 
> factories can be extended together in SaslParticipant, to improve 
> performance.
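The gain, roughly: Sasl.createSaslClient() walks the installed security
providers on every call, which is costly on hot RPC paths, whereas a
pre-resolved list of factories can be reused. Below is a simplified sketch of
the client side, as an illustration of the caching idea only; it is not the
committed FastSaslClientFactory, which may differ in structure.

    import java.util.ArrayList;
    import java.util.Enumeration;
    import java.util.List;
    import java.util.Map;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslClient;
    import javax.security.sasl.SaslClientFactory;
    import javax.security.sasl.SaslException;

    class CachedSaslClientFactory implements SaslClientFactory {
      private final List<SaslClientFactory> factories = new ArrayList<>();

      CachedSaslClientFactory() {
        // Resolve the provider-backed factories once, then reuse them.
        Enumeration<SaslClientFactory> all = Sasl.getSaslClientFactories();
        while (all.hasMoreElements()) {
          factories.add(all.nextElement());
        }
      }

      @Override
      public SaslClient createSaslClient(String[] mechanisms,
          String authorizationId, String protocol, String serverName,
          Map<String, ?> props, CallbackHandler cbh) throws SaslException {
        // Same contract as the JDK lookup: first factory that can build a
        // client for one of the requested mechanisms wins.
        for (SaslClientFactory factory : factories) {
          SaslClient client = factory.createSaslClient(mechanisms,
              authorizationId, protocol, serverName, props, cbh);
          if (client != null) {
            return client;
          }
        }
        return null;
      }

      @Override
      public String[] getMechanismNames(Map<String, ?> props) {
        List<String> names = new ArrayList<>();
        for (SaslClientFactory factory : factories) {
          for (String mech : factory.getMechanismNames(props)) {
            names.add(mech);
          }
        }
        return names.toArray(new String[0]);
      }
    }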



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16289) Allow extra jsvc startup option in hadoop_start_secure_daemon in hadoop-functions.sh

2019-05-03 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832271#comment-16832271
 ] 

Todd Lipcon commented on HADOOP-16289:
--

This seems fine to me if others are OK. +1

> Allow extra jsvc startup option in hadoop_start_secure_daemon in 
> hadoop-functions.sh
> 
>
> Key: HADOOP-16289
> URL: https://issues.apache.org/jira/browse/HADOOP-16289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16289.001.patch, HADOOP-16289.002.patch
>
>
> Because there are differing opinions in HADOOP-16276 and we might want to 
> pull in more people for discussion, I want to speed this up by making a 
> simple change to the script in this jira (which would have been included in 
> HADOOP-16276): add HADOOP_DAEMON_JSVC_EXTRA_OPTS to the jsvc startup 
> command, which allows users to specify their extra options for jsvc.
> CC [~tlipcon] [~hgadre] [~jojochuang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16276) Fix jsvc startup command in hadoop-functions.sh due to jsvc >= 1.0.11 changed default current working directory

2019-05-03 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832270#comment-16832270
 ] 

Todd Lipcon commented on HADOOP-16276:
--

hrm. Personally I liked patch 2 better, since as you said, it can be seen as a 
compatibility fix. The fix in HADOOP-16289 also seems reasonable, since as 
Hrishikesh said, some users may think the new '-cwd /' behavior is more secure 
anyway. But, the behavior in the latest patch here is somewhat strange, in that 
there's the combination of a passed-in flag _and_ autodetection, which doesn't 
seem very intuitive.

> Fix jsvc startup command in hadoop-functions.sh due to jsvc >= 1.0.11 changed 
> default current working directory
> ---
>
> Key: HADOOP-16276
> URL: https://issues.apache.org/jira/browse/HADOOP-16276
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16276.001.patch, HADOOP-16276.002.patch, 
> HADOOP-16276.003.patch
>
>
> In CDH6, when we bumped jsvc from 1.0.10 to 1.1.0 we hit 
> *KerberosAuthException: failure to login / LoginException: Unable to obtain 
> password from user*, because of DAEMON-264 and the fact that our 
> *dfs.nfs.keytab.file* config uses a relative path. I will probably file 
> another jira to issue a warning like *hdfs.keytab not found* before the 
> KerberosAuthException in this case.
> The solution is to add *-cwd $(pwd)* in the hadoop_start_secure_daemon 
> function in hadoop-functions.sh, but I will have to consider compatibility 
> with older jsvc versions <= 1.0.10. Will post the patch after I have tested 
> it.
> Thanks [~tlipcon] for finding the root cause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-05-03 Thread GitBox
hadoop-yetus commented on issue #768: HADOOP-16269. ABFS: add listFileStatus 
with StartFrom.
URL: https://github.com/apache/hadoop/pull/768#issuecomment-488953600
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1047 | trunk passed |
   | +1 | compile | 27 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 177 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 54 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 25 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | -0 | checkstyle | 15 | hadoop-tools/hadoop-azure: The patch generated 4 
new + 2 unchanged - 0 fixed = 6 total (was 2) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 55 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 81 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2593 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/768 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7924d0c9a0e0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6b7609 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/5/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org