[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2018-09-14 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615493#comment-16615493
 ] 

Erik Krogen commented on HADOOP-15726:
--

Thanks for taking a look, Chen!
# It can definitely be used for text-only info; you would just call {{log()}} with 
no arguments (no values specified). Then, you can use {{getCount()}} to 
indicate how many lines were throttled. I believe this addresses your 
parenthetical thought as well. Take a look at the [most recent 
patch|https://issues.apache.org/jira/secure/attachment/12938732/HDFS-13791-HDFS-12943.003.patch]
 in HDFS-13791 for examples of this.
# I considered this as well... While writing the Javadoc especially, I felt the 
potential for confusion between "logging" to the throttler and actually logging. 
I have a few ideas, though I'm not thrilled about any of them. Can you let me 
know your thoughts? (cc [~csun]):
** {{checkLog()}} - indicates that you are *checking* whether you should log 
rather than actually logging. This doesn't convey that you are also updating the 
values stored in the helper, though.
** {{store()}} - indicates that you store values. This may be too value-centric 
and may not accommodate the text-only situations.
** {{update()}} - fairly generic; indicates that you are updating the state of 
the helper, both in terms of values and whether or not logging should occur.
# The name is only necessary because the API which accepts a user-specified 
time value also accepts a logger name. I wanted to avoid too many method 
overloads with various argument combinations, so I made one short version 
({{log(double...)}}) and one long version which accepts both options (this has 
the additional benefit of avoiding confusion between {{log(long, double...)}} 
and {{log(double...)}}, since something intended as a {{long}} could also be 
interpreted as a {{double}}). If you look at the other examples in HDFS-13791, 
most of the time only the short version of the API is used, so there is none of 
this confusion. So far we have only one use case which requires multiple logs 
on a single helper, and that was a primary/dependent situation, so we made the 
dependent case easy. I see your reasoning that it may be less common, but with 
only one use case for multiple loggers so far, I would prefer to wait and see 
which ends up being more common. Let me know if this makes sense to you.
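For readers following along, the throttling-with-counting pattern discussed here can be sketched roughly as follows. This is a minimal illustration of the idea only, not the actual {{LogThrottlingHelper}} API from the patch; the class and method names below are hypothetical:

```java
/**
 * Minimal sketch of a log throttler: permits at most one log line per
 * interval and counts how many calls were suppressed in between.
 */
public class SimpleLogThrottler {
  private final long intervalMs;
  private long lastLogTimeMs;
  private boolean hasLogged = false;
  private long suppressedCount = 0;

  public SimpleLogThrottler(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Returns true if the caller should emit a log line now. */
  public synchronized boolean shouldLog(long nowMs) {
    if (!hasLogged || nowMs - lastLogTimeMs >= intervalMs) {
      hasLogged = true;
      lastLogTimeMs = nowMs;
      suppressedCount = 0;
      return true;
    }
    suppressedCount++;
    return false;
  }

  /** Number of calls suppressed since the last permitted log line. */
  public synchronized long getCount() {
    return suppressedCount;
  }

  public static void main(String[] args) {
    SimpleLogThrottler t = new SimpleLogThrottler(1000);
    System.out.println(t.shouldLog(0));    // first call: log
    System.out.println(t.shouldLog(500));  // within interval: suppressed
    System.out.println(t.getCount());      // one call suppressed so far
    System.out.println(t.shouldLog(1500)); // interval elapsed: log again
  }
}
```

A text-only caller would check {{shouldLog()}} and, when it returns true, include {{getCount()}} in the message to report how many lines were throttled.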

> Create utility to limit frequency of log statements
> ---
>
> Key: HADOOP-15726
> URL: https://issues.apache.org/jira/browse/HADOOP-15726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, util
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-15726.000.patch, HADOOP-15726.001.patch
>
>
> There is a common pattern of logging a behavior that is normally extraneous. 
> Under some circumstances, such a behavior becomes common, flooding the logs 
> and making it difficult to see what else is going on in the system. Under 
> such situations it is beneficial to limit how frequently the extraneous 
> behavior is logged, while capturing some summary information about the 
> suppressed log statements.
> This is currently implemented in {{FSNamesystemLock}} (in HDFS-10713). We 
> have additional use cases for this in HDFS-13791, so this is a good time to 
> create a common utility for different sites to share this logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15757:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

commit 8873d29d32ff6605ec5b3b5abfe385b046816063
Author: Thomas Marquardt 
Date: Fri Sep 14 22:34:19 2018 +

HADOOP-15757. ABFS: remove dependency on common-codec Base64.
 Contributed by Da Zhou.

> ABFS: remove dependency on common-codec Base64
> --
>
> Key: HADOOP-15757
> URL: https://issues.apache.org/jira/browse/HADOOP-15757
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15757-HADOOP-15407-001.patch, 
> HADOOP-15757-HADOOP-15407-002.patch
>
>
> Currently ABFS relies on common-codec Base64. Because different versions of 
> common-codec are widely used and some are missing the methods needed by ABFS, 
> this causes many "no such method" exceptions in customers' environments; hence 
> we decided to add a Base64 utility to avoid such issues in the future.
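The fix described above, replacing the external dependency with a small in-tree utility, can be approximated with the JDK's built-in codec. This is a hedged sketch; the actual ABFS utility class and its method names may differ:

```java
import java.nio.charset.StandardCharsets;

/**
 * Sketch of a Base64 utility backed by the JDK's java.util.Base64, so that
 * no particular common-codec version is required on the classpath.
 */
public class Base64Util {
  public static String encode(byte[] data) {
    return java.util.Base64.getEncoder().encodeToString(data);
  }

  public static byte[] decode(String encoded) {
    return java.util.Base64.getDecoder().decode(encoded);
  }

  public static void main(String[] args) {
    String e = encode("hadoop".getBytes(StandardCharsets.UTF_8));
    System.out.println(e); // aGFkb29w
    System.out.println(new String(decode(e), StandardCharsets.UTF_8)); // hadoop
  }
}
```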






[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615487#comment-16615487
 ] 

Hadoop QA commented on HADOOP-15757:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
24s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939792/HADOOP-15757-HADOOP-15407-002.patch
 |
| Optional Tests |  dupname  asflicense  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ffc45777f512 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 39bacd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15197/testReport/ |
| Max. process+thread count | 345 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15197/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2018-09-14 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615480#comment-16615480
 ] 

Chen Liang commented on HADOOP-15726:
-

Thanks [~xkrogen]! Some comments:

1. Please correct me if I'm wrong, but it looks like {{LogThrottlingHelper}} is 
specifically for logging numeric info, since the {{log}} method always takes 
{{double}} {{values}}, backed by {{SummaryStatistics}}. I think it is pretty 
cool to only log aggregated statistics (e.g. max, count) when there are too 
many logs about numbers. But it may be helpful to make it clear, probably in 
naming/comments, that this may not be the best fit for throttling/aggregating 
text-only logging. (A thought: even for logs without a numeric value to 
aggregate, it may still be useful to log how many lines of them were 
omitted/throttled; I wonder whether that would be a good/possible addition to 
this.)
 2. Since the {{log}} method does not actually log, but only tells the caller 
whether a log should be made, maybe rename it to something else?
 3. I guess I can see use cases for primary and dependent loggers, but it seems 
this may also be an extra burden on the caller. For example, in 
{{FSNamesystemLock}}, although the object is already named 
{{writeLockReportLogger}}, an explicit string "write" still needs to be passed 
to it. If someone later wants to add more logging with this logger, he/she had 
better be sure not only to use {{writeLockReportLogger}}, but also to call it 
with/without the static string "write". I imagine having dependent loggers is a 
rarer use case; maybe a more common case would be to just set the name when 
initializing and always use the only (primary) logger the rest of the time. 
E.g., maybe add a constructor with only (name, interval) and use that to 
initialize in {{FSNamesystemLock}} instead.
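The aggregation idea in point 1, emitting summary statistics (count, max, mean) for the suppressed values rather than one line per value, can be sketched like this. Hypothetical names only; this is not the real {{SummaryStatistics}}-backed implementation:

```java
import java.util.Locale;

/** Sketch: throttle numeric log values, emitting only aggregate statistics. */
public class AggregatingThrottler {
  private final long intervalMs;
  private long lastLogMs;
  private boolean hasLogged = false;
  private double max = Double.NEGATIVE_INFINITY;
  private double sum = 0;
  private long count = 0;

  public AggregatingThrottler(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /**
   * Record one value. Returns a summary of everything recorded since the
   * last emitted line when a log is due, or null while still throttled.
   */
  public synchronized String record(long nowMs, double value) {
    max = Math.max(max, value);
    sum += value;
    count++;
    if (!hasLogged || nowMs - lastLogMs >= intervalMs) {
      String summary = String.format(Locale.ROOT,
          "count=%d max=%.1f mean=%.1f", count, max, sum / count);
      hasLogged = true;
      lastLogMs = nowMs;
      max = Double.NEGATIVE_INFINITY;
      sum = 0;
      count = 0;
      return summary;
    }
    return null;
  }

  public static void main(String[] args) {
    AggregatingThrottler t = new AggregatingThrottler(1000);
    System.out.println(t.record(0, 5.0));    // first value logs immediately
    System.out.println(t.record(200, 9.0));  // suppressed
    System.out.println(t.record(400, 7.0));  // suppressed
    System.out.println(t.record(1200, 4.0)); // due: summary over 9, 7, 4
  }
}
```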




[jira] [Commented] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615478#comment-16615478
 ] 

Hadoop QA commented on HADOOP-15673:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} docker-hadoop-3 Compile Tests {color} ||
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 
Image:yetus/hadoop:date2018-09-14 |
| JIRA Issue | HADOOP-15673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939805/HADOOP-15673-docker-hadoop-3.00.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux f4255169d9c3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | docker-hadoop-3 / bced12e |
| maven | version: Apache Maven 3.3.9 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15198/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Hadoop:3 image is missing from dockerhub
> 
>
> Key: HADOOP-15673
> URL: https://issues.apache.org/jira/browse/HADOOP-15673
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15673-docker-hadoop-3.00.patch
>
>
> Currently the apache/hadoop:3 image is missing from Docker Hub because the 
> Dockerfile in the docker-hadoop-3 branch contains the outdated 3.0.0 download 
> URL. It should be updated to the latest 3.1.1 URL.






[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15673:

Status: Patch Available  (was: Open)




[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15673:

Attachment: (was: HADOOP-15673.00.patch)




[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15673:

Attachment: HADOOP-15673-docker-hadoop-3.00.patch




[jira] [Commented] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615467#comment-16615467
 ] 

Bharat Viswanadham commented on HADOOP-15673:
-

[~elek]

Updated the URLs to 3.1.1.

 




[jira] [Updated] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15673:

Attachment: HADOOP-15673.00.patch

> Hadoop:3 image is missing from dockerhub
> 
>
> Key: HADOOP-15673
> URL: https://issues.apache.org/jira/browse/HADOOP-15673
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15673.00.patch
>
>
> Currently the apache/hadoop:3 image is missing from Docker Hub because the 
> Dockerfile in the docker-hadoop-3 branch contains the outdated 3.0.0 download 
> URL. It should be updated to the latest 3.1.1 URL.






[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615464#comment-16615464
 ] 

Thomas Marquardt commented on HADOOP-15757:
---

+1 on the 002 patch.  As long as Yetus gives a +1, we can push it.

 

All tests pass for me too:
{code:java}
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 269, Failures: 0, Errors: 0, Skipped: 182
Tests run: 167, Failures: 0, Errors: 0, Skipped: 27
Total time: 04:26 min (Wall Clock)
{code}




[jira] [Assigned] (HADOOP-15673) Hadoop:3 image is missing from dockerhub

2018-09-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HADOOP-15673:
---

Assignee: Bharat Viswanadham

> Hadoop:3 image is missing from dockerhub
> 
>
> Key: HADOOP-15673
> URL: https://issues.apache.org/jira/browse/HADOOP-15673
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> Currently the apache/hadoop:3 image is missing from Docker Hub because the 
> Dockerfile in the docker-hadoop-3 branch contains the outdated 3.0.0 download 
> URL. It should be updated to the latest 3.1.1 URL.






[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615438#comment-16615438
 ] 

Hadoop QA commented on HADOOP-15757:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
58s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939782/HADOOP-15757-HADOOP-15407-001.patch
 |
| Optional Tests |  dupname  asflicense  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 00fbfdcd499d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 39bacd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15196/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15196/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 1) |
| 

[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615435#comment-16615435
 ] 

Da Zhou commented on HADOOP-15757:
--

[~tmarquardt] Thanks for the review.
Submitting HADOOP-15757-HADOOP-15407-002.patch:
- Base64.java L33: resolved
- TestConfigurationValidators.java, L24: resolved.




[jira] [Updated] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15757:
-
Attachment: HADOOP-15757-HADOOP-15407-002.patch




[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615396#comment-16615396
 ] 

Hadoop QA commented on HADOOP-15741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
69m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}182m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}324m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939743/HADOOP-15741.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux ed9f9daa2f27 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9923760 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15195/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15195/testReport/ |
| Max. process+thread count | 2998 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615380#comment-16615380
 ] 

Thomas Marquardt commented on HADOOP-15757:
---

LGTM, a couple of minor comments:

*Base64.java*:
   L33: {{byte DECODE_64[]}} should be {{byte[] DECODE_64}}.

*TestConfigurationValidators.java*:
 L24: imports should be in alphabetical order, so 
org.apache.hadoop.fs.azurebfs.utils.Base64 is below
 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException

 

Otherwise, +1 from me.  After we move to trunk, let's move this to 
hadoop-common.  We've actually used this class for years, so we know it is 
correct and very stable.
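The patch adds an in-tree Base64 class so ABFS no longer depends on whichever commons-codec version happens to be on the classpath. As a rough sketch of the idea only (not the actual patch; the class and method names here are hypothetical), a thin wrapper over the JDK's built-in {{java.util.Base64}} achieves the same isolation with no external jar:

```java
import java.util.Base64;

// Hypothetical in-tree utility: delegates to java.util.Base64 (JDK 8+),
// so no external codec jar is needed and no version-skew
// "no such method" error can occur at runtime.
public final class AbfsBase64 {
    private AbfsBase64() {
        // static utility, not instantiable
    }

    public static String encode(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }
}
```

The actual patch hand-rolls the decode table (hence the checkstyle magic-number exclusions mentioned below); the wrapper above is just the simplest dependency-free shape of the same utility.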

> ABFS: remove dependency on common-codec Base64
> --
>
> Key: HADOOP-15757
> URL: https://issues.apache.org/jira/browse/HADOOP-15757
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15757-HADOOP-15407-001.patch
>
>
> Currently ABFS relies on common-codec Base64. Because different versions of 
> common-codec are widely used and some are missing the methods needed by ABFS, 
> this causes lots of "no such method" exceptions in customers' environments. 
> Hence we decided to add a Base64 util to avoid such issues in the future.






[jira] [Updated] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15757:
-
Description: Currently ABFS relies on common-codec Base64, because 
different versions of common-codec are widely used and some are missing the 
methods needed by ABFS, it cause lots of "no such method" exception in 
customer's env, hence we decide to add util for Base64 to avoid such issues in 
future.  (was: Currently ABFS relies on common-codec Base64, because different 
versions of common-codec are widely used and some are missing the methods 
needed by ABFS, it cause lots of "no such method" exception, hence we decide to 
add util for Base64 to avoid such issues in future.)

> ABFS: remove dependency on common-codec Base64
> --
>
> Key: HADOOP-15757
> URL: https://issues.apache.org/jira/browse/HADOOP-15757
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15757-HADOOP-15407-001.patch
>
>
> Currently ABFS relies on common-codec Base64. Because different versions of 
> common-codec are widely used and some are missing the methods needed by ABFS, 
> this causes lots of "no such method" exceptions in customers' environments. 
> Hence we decided to add a Base64 util to avoid such issues in the future.






[jira] [Commented] (HADOOP-15739) ABFS: remove unused maven dependencies and add used undeclared dependencies

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615356#comment-16615356
 ] 

Da Zhou commented on HADOOP-15739:
--

1. I will update the test to remove log4j.
2. Sure, this is because another "no-such-method" exception happened in a 
customer env; ABFS is using org.codehaus.jackson to avoid that.
3. They will be removed in HADOOP-15757.
4. Good catch, it is for test only; will update it.

Will upload a patch once HADOOP-15757 is committed, as it removed some 
dependencies.

> ABFS: remove unused maven dependencies and add used undeclared dependencies
> ---
>
> Key: HADOOP-15739
> URL: https://issues.apache.org/jira/browse/HADOOP-15739
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15739-HADOOP-15407-001.patch, 
> HADOOP-15739-HADOOP-15407-002.patch
>
>







[jira] [Commented] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615337#comment-16615337
 ] 

Da Zhou commented on HADOOP-15757:
--

ABFS tests passed with my US West account.
Test results:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 269, Failures: 0, Errors: 0, Skipped: 31
Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

> ABFS: remove dependency on common-codec Base64
> --
>
> Key: HADOOP-15757
> URL: https://issues.apache.org/jira/browse/HADOOP-15757
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15757-HADOOP-15407-001.patch
>
>
> Currently ABFS relies on common-codec Base64. Because different versions of 
> common-codec are widely used and some are missing the methods needed by ABFS, 
> this causes lots of "no such method" exceptions. Hence we decided to add a 
> Base64 util to avoid such issues in the future.






[jira] [Updated] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15757:
-
Attachment: HADOOP-15757-HADOOP-15407-001.patch
Status: Patch Available  (was: Open)

Submitting HADOOP-15757-HADOOP-15407-001.patch:
- removed the dependency on the third-party Base64 jar
- added a Base64 util
- excluded it from checkstyle for now because of the magic-number violations; 
will try to move this to hadoop-common for use in the future.

 

> ABFS: remove dependency on common-codec Base64
> --
>
> Key: HADOOP-15757
> URL: https://issues.apache.org/jira/browse/HADOOP-15757
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15757-HADOOP-15407-001.patch
>
>
> Currently ABFS relies on common-codec Base64. Because different versions of 
> common-codec are widely used and some are missing the methods needed by ABFS, 
> this causes lots of "no such method" exceptions. Hence we decided to add a 
> Base64 util to avoid such issues in the future.






[jira] [Comment Edited] (HADOOP-15758) Filesystem.get API not working as expected with user argument

2018-09-14 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615305#comment-16615305
 ] 

Hrishikesh Gadre edited comment on HADOOP-15758 at 9/14/18 7:50 PM:


Here is a sample program to reproduce this issue: 
[https://gist.github.com/hgadre/38e1b625a6af70f1659fb19137a12ece]

The steps to reproduce are as follows
 * export KRB5CCNAME=/tmp/krb5cc_foo
 * export CLASSPATH=$CLASSPATH:$(hadoop classpath)
 * javac ReadWriteHDFSWithKinit.java
 * kinit -l 1m -kt hdfs.keytab [h...@abc.com|mailto:h...@abc.com] # kinit as a 
superuser (could be any user that has ability to proxy)
 * java ReadWriteHDFSWithKinit systest # note: access file as systest via the 
FileSystem.get(uri,conf,user) API

 

The last step fails with the following exception:
{noformat}
WARN security.UserGroupInformation: PriviledgedActionException as:h...@abc.com 
(auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]
18/06/21 12:59:58 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
18/06/21 12:59:58 WARN security.UserGroupInformation: 
PriviledgedActionException as:h...@abc.com (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
18/06/21 12:59:58 WARN hdfs.LeaseRenewer: Failed to renew lease for 
[DFSClient_NONMAPREDUCE_1855947848_1] for 30 seconds.  Will retry shortly ...
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "host-2.abc.com/10.15.13.17"; destination 
host is: "host-1.abc.com":8020;{noformat}
 


was (Author: hgadre):
Here is a sample program to reproduce this issue: 
[https://gist.github.com/hgadre/38e1b625a6af70f1659fb19137a12ece]

The steps to reproduce are as follows
 * export KRB5CCNAME=/tmp/krb5cc_foo
 * export CLASSPATH=$CLASSPATH:$(hadoop classpath)
 * javac ReadWriteHDFSWithKinit.java
 * kinit -l 1m -kt hdfs.keytab [h...@abc.com|mailto:h...@abc.com] # kinit as a 
superuser (could be any user that has ability to proxy)
 * java ReadWriteHDFSWithKinitCloudera systest # note: access file as systest 
via the FileSystem.get(uri,conf,user) API

 

The last step fails with following exception,
{noformat}
WARN security.UserGroupInformation: PriviledgedActionException as:h...@abc.com 
(auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]
18/06/21 12:59:58 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
18/06/21 12:59:58 WARN security.UserGroupInformation: 
PriviledgedActionException as:h...@abc.com (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
18/06/21 12:59:58 WARN hdfs.LeaseRenewer: Failed to renew lease for 
[DFSClient_NONMAPREDUCE_1855947848_1] for 30 seconds.  Will retry shortly ...
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "host-2.abc.com/10.15.13.17"; destination 
host is: "host-1.abc.com":8020;{noformat}
 

> Filesystem.get API not working as expected with user argument
> -
>
> Key: HADOOP-15758
> URL: https://issues.apache.org/jira/browse/HADOOP-15758
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>Priority: Major
>
> A user reported that the Filesystem.get API is not working as expected when 
> they use the 'FileSystem.get(URI, Configuration, user)' method signature - 
> but 'FileSystem.get(URI, Configuration)' works fine. The user is trying to 
> use this method signature to mimic proxy user functionality e.g. provide 
> ticket cache based kerberos credentials (using KRB5CCNAME env variable) for 
> the proxy user and then in the java program pass name of the user 

[jira] [Commented] (HADOOP-15758) Filesystem.get API not working as expected with user argument

2018-09-14 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615305#comment-16615305
 ] 

Hrishikesh Gadre commented on HADOOP-15758:
---

Here is a sample program to reproduce this issue: 
[https://gist.github.com/hgadre/38e1b625a6af70f1659fb19137a12ece]

The steps to reproduce are as follows
 * export KRB5CCNAME=/tmp/krb5cc_foo
 * export CLASSPATH=$CLASSPATH:$(hadoop classpath)
 * javac ReadWriteHDFSWithKinit.java
 * kinit -l 1m -kt hdfs.keytab [h...@abc.com|mailto:h...@abc.com] # kinit as a 
superuser (could be any user that has ability to proxy)
 * java ReadWriteHDFSWithKinitCloudera systest # note: access file as systest 
via the FileSystem.get(uri,conf,user) API

 

The last step fails with the following exception:
{noformat}
WARN security.UserGroupInformation: PriviledgedActionException as:h...@abc.com 
(auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)]
18/06/21 12:59:58 WARN ipc.Client: Exception encountered while connecting to 
the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
18/06/21 12:59:58 WARN security.UserGroupInformation: 
PriviledgedActionException as:h...@abc.com (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
18/06/21 12:59:58 WARN hdfs.LeaseRenewer: Failed to renew lease for 
[DFSClient_NONMAPREDUCE_1855947848_1] for 30 seconds.  Will retry shortly ...
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "host-2.abc.com/10.15.13.17"; destination 
host is: "host-1.abc.com":8020;{noformat}
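The proxy-user alternative that does work (per the issue description) keeps the Kerberos login as the superuser and wraps the filesystem calls in a {{doAs}} block. A minimal sketch, assuming the superuser already has {{hadoop.proxyuser.*}} impersonation rights configured; the user name "systest" mirrors the repro steps above:

```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The real (superuser) credentials come from the Kerberos ticket
        // cache pointed to by KRB5CCNAME, same as in the repro steps.
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        // Impersonate "systest"; requires hadoop.proxyuser.<superuser>.hosts
        // and .groups to be configured on the cluster.
        UserGroupInformation proxyUser =
                UserGroupInformation.createProxyUser("systest", realUser);
        proxyUser.doAs((PrivilegedExceptionAction<Void>) () -> {
            // All RPCs inside doAs are authenticated with the superuser's
            // TGT but executed on behalf of systest.
            FileSystem fs = FileSystem.get(conf);
            fs.getFileStatus(new Path("/user/systest"));
            return null;
        });
    }
}
```

By contrast, {{FileSystem.get(uri, conf, user)}} creates a remote-user UGI with no Kerberos credentials attached, which is consistent with the "Failed to find any Kerberos tgt" error above.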
 

> Filesystem.get API not working as expected with user argument
> -
>
> Key: HADOOP-15758
> URL: https://issues.apache.org/jira/browse/HADOOP-15758
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>Priority: Major
>
> A user reported that the Filesystem.get API is not working as expected when 
> they use the 'FileSystem.get(URI, Configuration, user)' method signature, 
> while 'FileSystem.get(URI, Configuration)' works fine. The user is trying to 
> use this method signature to mimic proxy-user functionality, e.g. provide 
> ticket-cache-based Kerberos credentials (using the KRB5CCNAME env variable) for 
> the proxy user and then, in the Java program, pass the name of the user to be 
> impersonated. The alternative, using the [proxy users 
> functionality|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]
>  in Hadoop, works as expected.
>  
> Since FileSystem.get(URI, Configuration, user) is a public API and does not 
> restrict its usage in this fashion, we should ideally make it work or add 
> docs discouraging its use to implement proxy users.
>  






[jira] [Created] (HADOOP-15758) Filesystem.get API not working as expected with user argument

2018-09-14 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HADOOP-15758:
-

 Summary: Filesystem.get API not working as expected with user 
argument
 Key: HADOOP-15758
 URL: https://issues.apache.org/jira/browse/HADOOP-15758
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hrishikesh Gadre


A user reported that the Filesystem.get API is not working as expected when 
they use the 'FileSystem.get(URI, Configuration, user)' method signature, while 
'FileSystem.get(URI, Configuration)' works fine. The user is trying to use this 
method signature to mimic proxy-user functionality, e.g. provide 
ticket-cache-based Kerberos credentials (using the KRB5CCNAME env variable) for 
the proxy user and then, in the Java program, pass the name of the user to be 
impersonated. The alternative, using the [proxy users 
functionality|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]
 in Hadoop, works as expected.

 

Since FileSystem.get(URI, Configuration, user) is a public API and does not 
restrict its usage in this fashion, we should ideally make it work or add docs 
discouraging its use to implement proxy users.

 






[jira] [Assigned] (HADOOP-15704) ABFS: Consider passing FS URI to CustomDelegationTokenManager

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou reassigned HADOOP-15704:


Assignee: Da Zhou

> ABFS: Consider passing FS URI to CustomDelegationTokenManager
> -
>
> Key: HADOOP-15704
> URL: https://issues.apache.org/jira/browse/HADOOP-15704
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Da Zhou
>Priority: Major
>
> Refer to Steve's comments in HADOOP-15692.  Passing the FS or FS URI to the 
> CustomDelegationTokenManager would allow it to provide per-filesystem tokens. 
>  We currently have implementations of CustomDelegationTokenManager, and need 
> to do a little leg work here, but it may be possible to update before GA.






[jira] [Assigned] (HADOOP-15702) ABFS: Increase timeout of ITestAbfsReadWriteAndSeek

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou reassigned HADOOP-15702:


Assignee: Da Zhou

> ABFS: Increase timeout of ITestAbfsReadWriteAndSeek
> ---
>
> Key: HADOOP-15702
> URL: https://issues.apache.org/jira/browse/HADOOP-15702
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
>
> ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek 
> fails for me all the time. Let's increase the timeout limit.
> It also seems to get executed twice...






[jira] [Assigned] (HADOOP-15662) ABFS: Better exception handling of DNS errors

2018-09-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou reassigned HADOOP-15662:


Assignee: Da Zhou

> ABFS: Better exception handling of DNS errors
> -
>
> Key: HADOOP-15662
> URL: https://issues.apache.org/jira/browse/HADOOP-15662
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Da Zhou
>Priority: Major
>
> DNS errors are common during testing due to typos or misconfiguration.  They 
> can also occur in production, as some transient DNS issues occur from time to 
> time. 
> 1) Let's investigate if we can distinguish between the two and fail fast for 
> the test issues, but continue to have retry logic for the transient DNS 
> issues in production.
> 2) Let's improve the error handling of DNS failures, so the user has an 
> actionable error message.






[jira] [Commented] (HADOOP-15702) ABFS: Increase timeout of ITestAbfsReadWriteAndSeek

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615211#comment-16615211
 ] 

Da Zhou commented on HADOOP-15702:
--

[~mackrorysd] Could we close this JIRA? I updated the global default timeout in 
HADOOP-15715 to account for long network latency, so this JIRA looks like a 
duplicate.

> ABFS: Increase timeout of ITestAbfsReadWriteAndSeek
> ---
>
> Key: HADOOP-15702
> URL: https://issues.apache.org/jira/browse/HADOOP-15702
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Sean Mackrory
>Priority: Major
>
> ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek 
> fails for me all the time. Let's increase the timeout limit.
> It also seems to get executed twice...






[jira] [Commented] (HADOOP-15712) ABFS to increase output stream close more robustly

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615204#comment-16615204
 ] 

Da Zhou commented on HADOOP-15712:
--

Since this is a WASB issue, it doesn't make sense to put it under HADOOP-15407; 
could we move it out?

> ABFS to increase output stream close more robustly
> --
>
> Key: HADOOP-15712
> URL: https://issues.apache.org/jira/browse/HADOOP-15712
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Priority: Minor
>
> if {{BlobOutputStream.close()}} raises an exception, then 
> {{NativeAzureFsOutputStream.close()}} doesn't set its {{this.out}} field to 
> null, so close() can still be re-invoked, this time with an error.
> the {{out.close()}} needs to be moved into the try/finally clause
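The fix described above (null out the wrapped stream even when {{close()}} throws) can be sketched in plain Java. This is an illustrative wrapper, not the actual WASB code; the class name is hypothetical:

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative pattern for HADOOP-15712: clear the delegate reference in a
// finally block so a second close() is a no-op even if the first one threw.
class RobustOutputStream extends OutputStream {
    private OutputStream out;

    RobustOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
    }

    @Override
    public void close() throws IOException {
        if (out == null) {
            return; // close() already attempted; nothing left to do
        }
        try {
            out.close();
        } finally {
            out = null; // runs even on failure, making close() idempotent
        }
    }
}
```

Without the {{finally}}, a failing {{out.close()}} leaves the field set, so a later {{close()}} (e.g. from try-with-resources plus an explicit call) re-invokes close on a broken stream and raises a second, misleading error.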






[jira] [Commented] (HADOOP-15715) ITestAzureBlobFileSystemE2E timing out with non-scale timeout of 10 min

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615199#comment-16615199
 ] 

Da Zhou commented on HADOOP-15715:
--

Hi [~ste...@apache.org], could you try the patch and see if it works for you? 

> ITestAzureBlobFileSystemE2E timing out with non-scale timeout of 10 min
> ---
>
> Key: HADOOP-15715
> URL: https://issues.apache.org/jira/browse/HADOOP-15715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Da Zhou
>Priority: Minor
> Attachments: HADOOP-15715-HADOOP-15407-001.patch, 
> HADOOP-15715-HADOOP-15407-002.patch
>
>
> {{ITestAzureBlobFileSystemRandomRead}} is timing out on remote parallel test 
> runs because it's working with multi-MB files, which need bandwidth that the 
> other tests deny.
> Either it needs to be tagged as a scale test or, at minimum, the timeout 
> changed to match that value.






[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615145#comment-16615145
 ] 

Hadoop QA commented on HADOOP-14178:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 277 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . hadoop-ozone/integration-test 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
in trunk has 4 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} framework in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} server-scm in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} ozonefs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 42s{color} 
| {color:red} root generated 11 new + 1335 unchanged - 0 fixed = 1346 total 
(was 1335) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
11s{color} | {color:green} root: The patch generated 0 new + 7068 unchanged - 
89 fixed = 7068 total (was 7157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
56s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched 

[jira] [Created] (HADOOP-15757) ABFS: remove dependency on common-codec Base64

2018-09-14 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-15757:


 Summary: ABFS: remove dependency on common-codec Base64
 Key: HADOOP-15757
 URL: https://issues.apache.org/jira/browse/HADOOP-15757
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Da Zhou
Assignee: Da Zhou


Currently ABFS relies on commons-codec for Base64. Because different versions of 
commons-codec are in wide use and some are missing the methods ABFS needs, this 
causes many "no such method" exceptions. We therefore decided to add a Base64 
utility to avoid such issues in the future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15753) ABFS: support path "abfs://mycluster/file/path"

2018-09-14 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15753:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

commit 39bacd6ab36872afdd97007c62e0b28ba05523a1
Author: Thomas Marquardt 
Date: Fri Sep 14 16:50:26 2018 +

HADOOP-15753. ABFS: support path "abfs://mycluster/file/path"
 Contributed by Da Zhou.

> ABFS: support path "abfs://mycluster/file/path"
> ---
>
> Key: HADOOP-15753
> URL: https://issues.apache.org/jira/browse/HADOOP-15753
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15753-HADOOP-15407-001.patch
>
>
> WASB supports the path format "wasb://mycluster/file/path", but ABFS doesn't, 
> which has caused some issues for customers. I will add support for this path 
> format.






[jira] [Commented] (HADOOP-15753) ABFS: support path "abfs://mycluster/file/path"

2018-09-14 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615126#comment-16615126
 ] 

Thomas Marquardt commented on HADOOP-15753:
---

+1, LGTM.  I will push this now.

Tests pass for me too:

mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 269, Failures: 0, Errors: 0, Skipped: 182
Tests run: 167, Failures: 0, Errors: 0, Skipped: 27
Total time: 13:29 min (Wall Clock)

> ABFS: support path "abfs://mycluster/file/path"
> ---
>
> Key: HADOOP-15753
> URL: https://issues.apache.org/jira/browse/HADOOP-15753
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15753-HADOOP-15407-001.patch
>
>
> WASB supports the path format "wasb://mycluster/file/path", but ABFS doesn't, 
> which has caused some issues for customers. I will add support for this path 
> format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15753) ABFS: support path "abfs://mycluster/file/path"

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615122#comment-16615122
 ] 

Da Zhou edited comment on HADOOP-15753 at 9/14/18 5:32 PM:
---

[~tmarquardt], yes, both S3a and wasb ignore the scheme and authority in the 
Path which is passed into filesystem operation.


was (Author: danielzhou):
[~tmarquardt], yes, both S3a and wasb ignore the scheme and authority in the 
Path which is passed in filesystem operation.

> ABFS: support path "abfs://mycluster/file/path"
> ---
>
> Key: HADOOP-15753
> URL: https://issues.apache.org/jira/browse/HADOOP-15753
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15753-HADOOP-15407-001.patch
>
>
> WASB supports the path format "wasb://mycluster/file/path", but ABFS doesn't, 
> which has caused some issues for customers. I will add support for this path 
> format.






[jira] [Commented] (HADOOP-15753) ABFS: support path "abfs://mycluster/file/path"

2018-09-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615122#comment-16615122
 ] 

Da Zhou commented on HADOOP-15753:
--

[~tmarquardt], yes, both S3a and wasb ignore the scheme and authority in the 
Path which is passed in filesystem operation.

> ABFS: support path "abfs://mycluster/file/path"
> ---
>
> Key: HADOOP-15753
> URL: https://issues.apache.org/jira/browse/HADOOP-15753
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15753-HADOOP-15407-001.patch
>
>
> WASB supports the path format "wasb://mycluster/file/path", but ABFS doesn't, 
> which has caused some issues for customers. I will add support for this path 
> format.






[jira] [Commented] (HADOOP-15753) ABFS: support path "abfs://mycluster/file/path"

2018-09-14 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615060#comment-16615060
 ] 

Thomas Marquardt commented on HADOOP-15753:
---

Looks like both wasb and s3a also ignore the scheme and the authority.  Can you 
confirm?  If yes, then +1 from me.

> ABFS: support path "abfs://mycluster/file/path"
> ---
>
> Key: HADOOP-15753
> URL: https://issues.apache.org/jira/browse/HADOOP-15753
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15753-HADOOP-15407-001.patch
>
>
> WASB supports the path format "wasb://mycluster/file/path", but ABFS doesn't, 
> which has caused some issues for customers. I will add support for this path 
> format.






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615006#comment-16615006
 ] 

Takanobu Asanuma commented on HADOOP-15741:
---

Thanks [~ajisakaa] for your review and your investigation! Uploaded the 2nd 
patch addressing the issue.

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or later to 
> support Java 10.






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15741:
--
Attachment: HADOOP-15741.2.patch

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch, HADOOP-15741.2.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or later to 
> support Java 10.






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614919#comment-16614919
 ] 

Akira Ajisaka commented on HADOOP-15741:


bq. +1  javadoc 6m 37s  root generated 0 new + 142 unchanged - 4089 fixed = 142 
total (was 4231)
This is strange.

In 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15190/artifact/out/patch-javadoc-javadoc-root.txt,
 yarn-server-common module output only 100 javadoc warnings, although 
HADOOP-13083 raised the limit to 1.
After further research, I found that MJAVADOC-475 replaced the {{additionalparam}} 
parameter with {{additionalOptions}}, which is why HADOOP-13083 does not take 
effect in maven-javadoc-plugin 3.0.1.

Hi [~tasanuma0829], would you update the configuration? I'm +1 if that is 
addressed.
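For reference, a sketch of what the rename could look like in the affected 
pom.xml. This is an illustrative assumption, not the exact Hadoop build 
configuration: the plugin declaration shown is generic and the {{-Xmaxwarns}} 
value is a placeholder.

```xml
<!-- Illustrative sketch only. Maven Javadoc Plugin 3.x ignores the old
     additionalparam parameter (MJAVADOC-475); the equivalent setting is
     additionalOptions. The -Xmaxwarns value is a placeholder. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- before: <additionalparam>-Xmaxwarns 10000</additionalparam> -->
    <additionalOptions>
      <additionalOption>-Xmaxwarns 10000</additionalOption>
    </additionalOptions>
  </configuration>
</plugin>
```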

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or later to 
> support Java 10.






[jira] [Commented] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614871#comment-16614871
 ] 

Hadoop QA commented on HADOOP-15754:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
37s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939721/HADOOP-15754.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 25f6534507c7 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f1a893f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15194/testReport/ |
| Max. process+thread count | 328 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15194/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> 

[jira] [Commented] (HADOOP-15755) StringUtils#createStartupShutdownMessage throws NPE when args is null

2018-09-14 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614840#comment-16614840
 ] 

Jason Lowe commented on HADOOP-15755:
-

Thanks for the report and patch!  Fix looks fine.  It could use 
Collections.emptyList but that's not a must-fix.

Would you mind adding a unit test?  It's trivial in this case since the test 
just needs to invoke the method with a null args parameter.  That way, if 
someone later refactors the method, a test will verify this doesn't regress.


> StringUtils#createStartupShutdownMessage throws NPE when args is null
> -
>
> Key: HADOOP-15755
> URL: https://issues.apache.org/jira/browse/HADOOP-15755
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15755.001.patch
>
>
> StringUtils#createStartupShutdownMessage uses 
> {code:java}
> Arrays.asList(args)
> {code}
> which throws NPE when args is null.






[jira] [Commented] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-14 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614784#comment-16614784
 ] 

Gabor Bota commented on HADOOP-15754:
-

Had to fix 
{{AbstractS3GuardToolTestBase#testSetCapacityFailFastOnReadWriteOfZero}} too, 
because if there's no region set, it will fail with:
{noformat}
[ERROR] 
testSetCapacityFailFastOnReadWriteOfZero(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 2.121 s  <<< ERROR!
java.nio.file.AccessDeniedException: S3 client role lacks permission 
s3:GetBucketLocation for s3a://bucket
Caused by: java.nio.file.AccessDeniedException: bucket: getBucketLocation() on 
bucket: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
A511C83C764B02A8; S3 Extended Request ID: 
tGDU7Kl+FUN7hqCyHo30YGkDvNPe1b5mFP7OPDCPhEHdRzyiTlrfP90b/p+oB6ahaaQMoU/2ojg=), 
S3 Extended Request ID: 
tGDU7Kl+FUN7hqCyHo30YGkDvNPe1b5mFP7OPDCPhEHdRzyiTlrfP90b/p+oB6ahaaQMoU/2ojg=:AccessDenied
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
A511C83C764B02A8; S3 Extended Request ID: 
tGDU7Kl+FUN7hqCyHo30YGkDvNPe1b5mFP7OPDCPhEHdRzyiTlrfP90b/p+oB6ahaaQMoU/2ojg=)
{noformat}

-

Tested on eu-west-1.

Test output with {{fs.s3a.s3guard.ddb.region}} set:
{noformat}
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
140.708 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
Time elapsed: 1.266 s  <<< ERROR!
java.lang.IllegalArgumentException: No DynamoDB table name configured

[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 206.605 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
{noformat}

Test output without {{fs.s3a.s3guard.ddb.region}} set:

{noformat}
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
109.912 s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.001 s 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
[ERROR] org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore  Time 
elapsed: 0.001 s  <<< ERROR!
java.lang.IllegalArgumentException: No DynamoDB region configured
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.beforeClassSetup(ITestDynamoDBMetadataStore.java:150)
{noformat}

> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15754.001.patch, HADOOP-15754.002.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 

[jira] [Updated] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-14 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15754:

Attachment: HADOOP-15754.002.patch

> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15754.001.patch, HADOOP-15754.002.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> I think the solution is just to clear any tag.* options set in the 
> configuration at the beginning of the test.
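The clearing step described above can be sketched generically. In this sketch 
{{java.util.Properties}} stands in for Hadoop's {{Configuration}} (which would 
use its entry iterator plus {{unset()}}), and the class and method names are 
illustrative, not Hadoop APIs:

```java
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class ClearTagOptions {

    // Remove every property whose key starts with the given prefix and
    // return how many were removed. Keys are collected first so that
    // removal does not happen while iterating the property names.
    static int clearByPrefix(Properties conf, String prefix) {
        Set<String> doomed = new HashSet<>();
        for (String key : conf.stringPropertyNames()) {
            if (key.startsWith(prefix)) {
                doomed.add(key);
            }
        }
        for (String key : doomed) {
            conf.remove(key);
        }
        return doomed.size();
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // The tag-option prefix shown is an assumption for illustration.
        conf.setProperty("fs.s3a.s3guard.ddb.table.tag.owner", "me");
        conf.setProperty("fs.s3a.endpoint", "s3.amazonaws.com");
        System.out.println(clearByPrefix(conf, "fs.s3a.s3guard.ddb.table.tag."));
        System.out.println(conf.stringPropertyNames());
    }
}
```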






[jira] [Commented] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614773#comment-16614773
 ] 

Hadoop QA commented on HADOOP-15756:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
7s{color} | {color:green} root generated 0 new + 1330 unchanged - 5 fixed = 
1330 total (was 1335) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939699/HADOOP-15756.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ec419055d963 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f1a893f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15192/testReport/ |
| Max. process+thread count | 1455 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614688#comment-16614688
 ] 

Hadoop QA commented on HADOOP-15741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
57m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
37s{color} | {color:green} root generated 0 new + 142 unchanged - 4089 fixed = 
142 total (was 4231) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}281m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939663/HADOOP-15741.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux f67a19a1d75b 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 568ebec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15190/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15190/testReport/ |
| Max. process+thread count | 3545 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15190/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> 

[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-09-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614660#comment-16614660
 ] 

Akira Ajisaka commented on HADOOP-14178:


022 patch: rebased

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Attachment: HADOOP-14178.022.patch

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.






[jira] [Updated] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15756:
---
Status: Patch Available  (was: Open)

> [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement
> --
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15756.01.patch
>
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to remove the usage of IPAddressUtil.






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15741:
---
Attachment: (was: HADOOP-15741.01.patch)

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15741:
---
Attachment: HADOOP-15741.01.patch

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.






[jira] [Updated] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15756:
---
Attachment: HADOOP-15756.01.patch

> [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement
> --
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15756.01.patch
>
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to remove the usage of IPAddressUtil.






[jira] [Updated] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15756:
---
Summary: [JDK10] Migrate from sun.net.util.IPAddressUtil to the 
replacement  (was: [JDK10] Migrate from sun.net.util.IPAddressUtil to 
org.apache.commons.validator.routines.InetAddressValidator)
Description: In JDK10, sun.net.util.IPAddressUtil is encapsulated and not 
accessible from unnamed modules. This issue is to remove the usage of 
IPAddressUtil.  (was: In JDK10, sun.net.util.IPAddressUtil is encapsulated and 
not accessible from unnamed modules. This issue is to migrate to 
org.apache.commons.validator.routines.InetAddressValidator.)

> [JDK10] Migrate from sun.net.util.IPAddressUtil to the replacement
> --
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to remove the usage of IPAddressUtil.






[jira] [Commented] (HADOOP-15755) StringUtils#createStartupShutdownMessage throws NPE when args is null

2018-09-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614644#comment-16614644
 ] 

Hadoop QA commented on HADOOP-15755:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15755 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939678/HADOOP-15755.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 44cde7d22bb2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 568ebec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15191/testReport/ |
| Max. process+thread count | 1361 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15191/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |






[jira] [Commented] (HADOOP-15716) native library dependency on the very first build

2018-09-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614619#comment-16614619
 ] 

Steve Loughran commented on HADOOP-15716:
-

bq. As you see in my build command I am choosing the hdds and dist profiles 
exclusively.

what happens if you start the day with a full local (tests-skipped) build? 

{code}
mvn -T 1C install -DskipTests
{code}

then do the specific profiles. 

FWIW, I do that and have tabbed terminal windows in the different subprojects; 
it stops me accidentally kicking off a full build. I am on macOS, and a full 
build does not create native libs unless I add the -Pnative option. 


> native library dependency on the very first build
> -
>
> Key: HADOOP-15716
> URL: https://issues.apache.org/jira/browse/HADOOP-15716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Affects Versions: 3.2.0
> Environment: [INFO] Detecting the operating system and CPU 
> architecture
> [INFO] 
> 
> [INFO] os.detected.name: osx
> [INFO] os.detected.arch: x86_64
> [INFO] os.detected.version: 10.13
> [INFO] os.detected.version.major: 10
> [INFO] os.detected.version.minor: 13
> [INFO] os.detected.classifier: osx-x86_64
>Reporter: Sree Vaddi
>Priority: Major
>
> When building hadoop (hdds exactly, but hadoop, too) for the very first time, 
> tests fail due to the dependency on the native lib (missing libhadoop.so).  
> As a workaround, one can get past this by skipping tests.  But it is a chicken 
> & egg situation: 'libhadoop.so' would have to be installed before building 
> hadoop for the very first time.
>  
> Suggestion: add a first-time flag or some logic to detect this, then skip the 
> failing tests and/or compile/install libhadoop.so before running those 
> failing tests.
>  
>  
> HW14169:hadoop svaddi$ mvn clean package install -Phdds -Pdist -Dtar
> [INFO] Running org.apache.hadoop.util.TestTime
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 s 
> - in org.apache.hadoop.util.TestTime
> [INFO] Running org.apache.hadoop.util.TestNativeCodeLoader
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.117 
> s <<< FAILURE! - in org.apache.hadoop.util.TestNativeCodeLoader
> [ERROR] testNativeCodeLoaded(org.apache.hadoop.util.TestNativeCodeLoader)  
> Time elapsed: 0.027 s  <<< FAILURE!
> java.lang.AssertionError: TestNativeCodeLoader: libhadoop.so testing was 
> required, but libhadoop.so was not loaded.
>     at org.junit.Assert.fail(Assert.java:88)
>     at 
> org.apache.hadoop.util.TestNativeCodeLoader.testNativeCodeLoaded(TestNativeCodeLoader.java:48)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>     at 
> 

[jira] [Commented] (HADOOP-15220) Über-jira: S3a phase V: Hadoop 3.2 features

2018-09-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614615#comment-16614615
 ] 

Steve Loughran commented on HADOOP-15220:
-

we're 100% done here; just one little S3Guard test tune to get out of the way 
on the dependent S3Guard patch, HADOOP-15754. Hope to have that today. 

*Anyone who can, please checkout and test this stuff*

> Über-jira: S3a phase V: Hadoop 3.2 features
> ---
>
> Key: HADOOP-15220
> URL: https://issues.apache.org/jira/browse/HADOOP-15220
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Über-jira for S3A work for Hadoop 3.2.x
> The items from HADOOP-14831 which didn't get into Hadoop-3.1, and anything 
> else






[jira] [Commented] (HADOOP-15754) s3guard: testDynamoTableTagging should clear existing config

2018-09-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614613#comment-16614613
 ] 

Steve Loughran commented on HADOOP-15754:
-

OK

# skip if the region isn't there. That allows people to skip this test by not 
setting it.
# make sure bucket capacity is set to (1, 1), so if something goes wrong with 
cleanup, cost is minimal

> s3guard: testDynamoTableTagging should clear existing config
> 
>
> Key: HADOOP-15754
> URL: https://issues.apache.org/jira/browse/HADOOP-15754
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15754.001.patch
>
>
> I recently committed HADOOP-14734 which adds support for tagging Dynamo DB 
> tables for S3Guard when they are created.
>  
> Later, when testing another patch, I hit a test failure because I still had a 
> tag option set in my test configuration (auth-keys.xml) that was adding my 
> own table tag.
> {noformat}
> [ERROR] 
> testDynamoTableTagging(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 13.384 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoTableTagging(ITestS3GuardToolDynamoDB.java:129)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> I think the solution is just to clear any tag.* options set in the 
> configuration at the beginning of the test.






[jira] [Updated] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to org.apache.commons.validator.routines.InetAddressValidator

2018-09-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15756:
---
Description: In JDK10, sun.net.util.IPAddressUtil is encapsulated and not 
accessible from unnamed modules. This issue is to migrate to 
org.apache.commons.validator.routines.InetAddressValidator.  (was: In JDK10, 
sun.net.util.IPAddressUtil is encapsulated and not accessible from unnamed 
module. This issue is to migrate to 
org.apache.commons.validator.routines.InetAddressValidator.)

> [JDK10] Migrate from sun.net.util.IPAddressUtil to 
> org.apache.commons.validator.routines.InetAddressValidator
> -
>
> Key: HADOOP-15756
> URL: https://issues.apache.org/jira/browse/HADOOP-15756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net, util
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
> unnamed modules. This issue is to migrate to 
> org.apache.commons.validator.routines.InetAddressValidator.






[jira] [Created] (HADOOP-15756) [JDK10] Migrate from sun.net.util.IPAddressUtil to org.apache.commons.validator.routines.InetAddressValidator

2018-09-14 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15756:
--

 Summary: [JDK10] Migrate from sun.net.util.IPAddressUtil to 
org.apache.commons.validator.routines.InetAddressValidator
 Key: HADOOP-15756
 URL: https://issues.apache.org/jira/browse/HADOOP-15756
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net, util
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


In JDK10, sun.net.util.IPAddressUtil is encapsulated and not accessible from 
unnamed module. This issue is to migrate to 
org.apache.commons.validator.routines.InetAddressValidator.






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614574#comment-16614574
 ] 

Akira Ajisaka commented on HADOOP-15741:


+1 pending Jenkins. Thanks [~tasanuma0829].

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.






[jira] [Updated] (HADOOP-15755) StringUtils#createStartupShutdownMessage throws NPE when args is null

2018-09-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HADOOP-15755:
-
Status: Patch Available  (was: Open)

> StringUtils#createStartupShutdownMessage throws NPE when args is null
> -
>
> Key: HADOOP-15755
> URL: https://issues.apache.org/jira/browse/HADOOP-15755
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15755.001.patch
>
>
> StringUtils#createStartupShutdownMessage uses 
> {code:java}
> Arrays.asList(args)
> {code}
> which throws NPE when args is null.






[jira] [Updated] (HADOOP-15755) StringUtils#createStartupShutdownMessage throws NPE when args is null

2018-09-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HADOOP-15755:
-
Attachment: HADOOP-15755.001.patch

> StringUtils#createStartupShutdownMessage throws NPE when args is null
> -
>
> Key: HADOOP-15755
> URL: https://issues.apache.org/jira/browse/HADOOP-15755
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15755.001.patch
>
>
> StringUtils#createStartupShutdownMessage uses 
> {code:java}
> Arrays.asList(args)
> {code}
> which throws NPE when args is null.






[jira] [Commented] (HADOOP-15716) native library dependency on the very first build

2018-09-14 Thread Sree Vaddi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614502#comment-16614502
 ] 

Sree Vaddi commented on HADOOP-15716:
-

[~jojochuang]

Exactly. When
{code:java}
[INFO] os.detected.name: osx{code}
is detected by mvn, why is it even looking for native libs?

 

[~ste...@apache.org]

As you can see from my build command, I am choosing the hdds and dist profiles 
exclusively.

$ {{mvn help:active-profiles}}

 

[INFO]
Active Profiles for Project 'org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT':

The following profiles are active:




Active Profiles for Project 
'org.apache.hadoop:hadoop-build-tools:jar:3.2.0-SNAPSHOT':

The following profiles are active:




Active Profiles for Project 
'org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-annotations:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - jdk1.8 (source: org.apache.hadoop:hadoop-annotations:3.2.0-SNAPSHOT)
 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-project-dist:pom:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-assemblies:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-maven-plugins:maven-plugin:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-minikdc:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 'org.apache.hadoop:hadoop-auth:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-auth-examples:war:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-common:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - shelltest (source: org.apache.hadoop:hadoop-common:3.2.0-SNAPSHOT)
 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 'org.apache.hadoop:hadoop-nfs:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 'org.apache.hadoop:hadoop-kms:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-common-project:pom:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-hdfs-client:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 'org.apache.hadoop:hadoop-hdfs:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - shelltest (source: org.apache.hadoop:hadoop-hdfs:3.2.0-SNAPSHOT)
 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 
'org.apache.hadoop:hadoop-hdfs-native-client:jar:3.2.0-SNAPSHOT':

The following profiles are active:

 - os.mac (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)
 - hbase1 (source: org.apache.hadoop:hadoop-project:3.2.0-SNAPSHOT)



Active Profiles for Project 

[jira] [Created] (HADOOP-15755) StringUtils#createStartupShutdownMessage throws NPE when args is null

2018-09-14 Thread Lokesh Jain (JIRA)
Lokesh Jain created HADOOP-15755:


 Summary: StringUtils#createStartupShutdownMessage throws NPE when 
args is null
 Key: HADOOP-15755
 URL: https://issues.apache.org/jira/browse/HADOOP-15755
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


StringUtils#createStartupShutdownMessage uses 
{code:java}
Arrays.asList(args)
{code}
which throws NPE when args is null.
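Since JDK 7, java.util.Arrays.asList rejects a null array immediately via Objects.requireNonNull in its backing list's constructor. A minimal sketch of the failure and of a null-safe guard (the toArgList helper below is hypothetical, not the actual HADOOP-15755 patch):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class StartupArgs {
    // Hypothetical null-safe wrapper: treat a null argv as an empty list
    // instead of letting Arrays.asList(null) throw NullPointerException.
    static List<String> toArgList(String[] args) {
        return args == null ? Collections.emptyList() : Arrays.asList(args);
    }

    public static void main(String[] argv) {
        boolean npe = false;
        try {
            // Throws NPE: Arrays.asList null-checks the array up front.
            Arrays.asList((String[]) null);
        } catch (NullPointerException e) {
            npe = true;
        }
        System.out.println(npe);                    // true
        System.out.println(toArgList(null).size()); // 0
    }
}
```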






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614462#comment-16614462
 ] 

Takanobu Asanuma commented on HADOOP-15741:
---

Uploaded the 1st patch.

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or higher to 
> support Java 10.






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15741:
--
Attachment: HADOOP-15741.1.patch

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or higher to 
> support Java 10.






[jira] [Updated] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15741:
--
Status: Patch Available  (was: Open)

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15741.1.patch
>
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or higher to 
> support Java 10.






[jira] [Commented] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614454#comment-16614454
 ] 

Takanobu Asanuma commented on HADOOP-15741:
---

I'd like to assign this jira to myself.

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or higher to 
> support Java 10.






[jira] [Assigned] (HADOOP-15741) [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1

2018-09-14 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HADOOP-15741:
-

Assignee: Takanobu Asanuma

> [JDK10] Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
> ---
>
> Key: HADOOP-15741
> URL: https://issues.apache.org/jira/browse/HADOOP-15741
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Minor
>
> MJAVADOC-517 is fixed in 3.0.1. Let's upgrade the plugin to 3.0.1 or upper to 
> support Java 10.


