[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-05-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288479#comment-15288479
 ] 

John Zhuge commented on HADOOP-12718:
-

I think the patch should use {{o.a.h.security.AccessControlException}} (which 
already has many usages) instead of {{java.nio.file.AccessDeniedException}}.
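
For illustration, a minimal sketch of surfacing the local permission failure with 
that exception type (the helper name and check are assumptions, not the actual patch):

{code:title=Sketch (hypothetical helper, not the actual patch)|borderStyle=solid}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;

// Hedged sketch: when the source path is not visible because its parent
// directory cannot be read, report a permission problem instead of
// "No such file or directory". AccessControlException extends IOException,
// so callers that already catch IOException keep working.
static void checkLocalSourceAccessible(File src) throws IOException {
  File parent = src.getAbsoluteFile().getParentFile();
  if (!src.exists() && parent != null && !parent.canRead()) {
    throw new AccessControlException(src.getPath() + " (Permission denied)");
  }
}
{code}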

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-05-17 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Status: Patch Available  (was: Open)

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch
>
>
> When copying files to S3, depending on the file listing, some mappers can hit S3 
> partition hotspots. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths that are written out in SimpleCopyListing to 
> avoid this issue.
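
For illustration, a minimal sketch of the randomization idea (an assumed approach, 
not the attached patch): buffer the listed paths and shuffle them before they are 
written out, so adjacent S3 key prefixes are spread across mappers.

{code:title=Sketch (assumed approach, not the attached patch)|borderStyle=solid}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.fs.Path;

// Hedged sketch: shuffle the listing so that adjacent S3 key prefixes are
// spread across DistCp mappers instead of forming hotspots. How
// SimpleCopyListing actually buffers its entries is not shown here.
static List<Path> randomize(List<Path> listedPaths) {
  List<Path> shuffled = new ArrayList<>(listedPaths);
  Collections.shuffle(shuffled);
  return shuffled;
}
{code}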






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.003.branch-2.8.patch

Attaching HADOOP-13157.003.branch-2.8.patch for branch-2.8. Looks like I ran 
into Robert's environment removal change.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch, 
> HADOOP-13157.003.branch-2.8.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}
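
For illustration, the annotation placement mentioned in the comments above (the 
constant's value below is a placeholder, not the real message):

{code:title=Style sketch (illustrative only)|borderStyle=solid}
import com.google.common.annotations.VisibleForTesting;

class StyleExample {
  // Hedged style sketch: the annotation sits on its own line above the member.
  // The constant's value here is a placeholder, not the real message.
  @VisibleForTesting
  public static final String NO_VALID_PROVIDERS = "placeholder message";
}
{code}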






[jira] [Updated] (HADOOP-13138) Unable to append to a SequenceFile with Compression.NONE.

2016-05-17 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13138:
---
Attachment: HADOOP-13138-02.patch

Attaching the patch with the checkstyle nits fixed.

> Unable to append to a SequenceFile with Compression.NONE.
> -
>
> Key: HADOOP-13138
> URL: https://issues.apache.org/jira/browse/HADOOP-13138
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Gervais Mickaël
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HADOOP-13138-01.patch, HADOOP-13138-02.patch
>
>
> Hi,
> I'm trying to use the append functionality on an existing _SequenceFile_.
> If I set _Compression.NONE_, it works when the file is created, but when the 
> file already exists I get a _NullPointerException_. By the way, it works if I 
> specify a compression with a codec.
> {code:title=Failing code|borderStyle=solid}
> Option compression = compression(CompressionType.NONE);
> Option keyClass = keyClass(LongWritable.class);
> Option valueClass = valueClass(BytesWritable.class);
> Option out = file(dfs);
> Option append = appendIfExists(true);
> writer = createWriter(conf,
>  out,
>  append,
>  compression,
>  keyClass,
>  valueClass);
> {code}
> The following exception is thrown when the file exists, because the compression 
> option is checked:
> {code}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1119)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
> {code}
> This is due to the *codec* which is _null_:
> {code:title=SequenceFile.java|borderStyle=solid}
>  if (readerCompressionOption.value != compressionTypeOption.value
> || !readerCompressionOption.codec.getClass().getName()
> 
> .equals(compressionTypeOption.codec.getClass().getName())) {
>   throw new IllegalArgumentException(
>   "Compression option provided does not match the file");
> }
> {code}
> Thanks,
> Mickaël
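
For illustration, a null-safe variant of the check quoted above (an assumed fix, 
not necessarily what the attached patches do), so that {{CompressionType.NONE}} 
(codec == null) no longer triggers the NPE:

{code:title=Sketch of a null-safe check (assumed fix)|borderStyle=solid}
// Hedged sketch: treat "both codecs null" as a match and only call getClass()
// when both sides are non-null. Variable names mirror the snippet quoted above.
CompressionCodec readerCodec = readerCompressionOption.codec;
CompressionCodec writerCodec = compressionTypeOption.codec;
boolean codecMatches =
    (readerCodec == null && writerCodec == null)
    || (readerCodec != null && writerCodec != null
        && readerCodec.getClass().getName()
            .equals(writerCodec.getClass().getName()));
if (readerCompressionOption.value != compressionTypeOption.value || !codecMatches) {
  throw new IllegalArgumentException(
      "Compression option provided does not match the file");
}
{code}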






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-05-17 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-001.patch

TestCopyListing covers this codepath. Encountered HADOOP-13170 when running 
unit tests; included the trivial fix for it.

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch
>
>
> When copying files to S3, depending on the file listing, some mappers can hit S3 
> partition hotspots. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths that are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Created] (HADOOP-13170) TestOptionsParser in o.a.h.tools fails

2016-05-17 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13170:
-

 Summary: TestOptionsParser in o.a.h.tools fails
 Key: HADOOP-13170
 URL: https://issues.apache.org/jira/browse/HADOOP-13170
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Rajesh Balamohan
Priority: Trivial


testToString fails. It should compare with "mapBandwidth=100" instead of 
"mapBandwidth=100.0".






[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-05-17 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Component/s: tools/distcp

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> When copying files to S3, depending on the file listing, some mappers can hit S3 
> partition hotspots. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be a lot slower than others. It would be good 
> to randomize the file paths that are written out in SimpleCopyListing to 
> avoid this issue.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-17 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288105#comment-15288105
 ] 

Billie Rinaldi commented on HADOOP-12893:
-

I took a look at the patch. Apache releases cannot contain any LGPL 
dependencies, so if any exist they must be removed before a release can be 
made. Going through the list of things that are stated to be LGPL in this patch:
* FindBugs-jsr305 looks like it might be BSD, but I'm not sure 
(https://github.com/findbugsproject/findbugs/blob/master/findbugs/licenses/LICENSE-jsr305.txt)
* Data Mapper for Jackson and Xml Compatibility extensions for Jackson are 
dual-licensed AL (http://wiki.fasterxml.com/JacksonDownload under "Licensing")
* Javassist is triple licensed MPL/LGPL/Apache 
(https://github.com/jboss-javassist/javassist/tree/rel_3_18_1_ga)

I can't find the following items bundled; can anyone else? If they aren't 
actually bundled, we don't need to mention them.
* logback could be licensed under EPL instead 
(http://logback.qos.ch/license.html)
* jdiff is LGPL and can't be bundled

I haven't looked in detail at the other additions in the patch, but we should 
make sure they are all bundled dependencies. I was wondering about the 
inclusion of mockito and junit, for example. Are those actually needed, or is 
it just a case of not specifying the test scope for those dependencies in some 
of the modules?

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Created] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-05-17 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13169:
-

 Summary: Randomize file list in SimpleCopyListing
 Key: HADOOP-13169
 URL: https://issues.apache.org/jira/browse/HADOOP-13169
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Rajesh Balamohan
Priority: Minor


When copying files to S3, depending on the file listing, some mappers can hit S3 
partition hotspots. This is more visible when data is copied from a Hive 
warehouse with lots of partitions (e.g. date partitions). In such cases, some of 
the tasks tend to be a lot slower than others. It would be good to 
randomize the file paths that are written out in SimpleCopyListing to avoid 
this issue.






[jira] [Updated] (HADOOP-13168) Support Future.get with timeout in ipc async calls

2016-05-17 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13168:
-
Status: Patch Available  (was: Open)

> Support Future.get with timeout in ipc async calls
> --
>
> Key: HADOOP-13168
> URL: https://issues.apache.org/jira/browse/HADOOP-13168
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13168_20160517.patch
>
>
> Currently, the Future returned by an IPC async call only supports Future.get() 
> but not Future.get(timeout, unit). We should support the latter as well.
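
For illustration, how a caller could use the timed variant once it is supported 
(the future's value type and the 5-second bound are made up for the example):

{code:title=Usage sketch (hypothetical caller)|borderStyle=solid}
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hedged sketch: bound the wait on an async RPC instead of blocking forever.
static long waitWithTimeout(Future<Long> future) throws IOException {
  try {
    return future.get(5, TimeUnit.SECONDS);
  } catch (TimeoutException e) {
    future.cancel(true);  // give up after the timeout
    throw new IOException("RPC did not complete within 5 seconds", e);
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new IOException(e);
  } catch (ExecutionException e) {
    throw new IOException(e);
  }
}
{code}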






[jira] [Updated] (HADOOP-13168) Support Future.get with timeout in ipc async calls

2016-05-17 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13168:
-
Attachment: c13168_20160517.patch

c13168_20160517.patch: 1st patch.

> Support Future.get with timeout in ipc async calls
> --
>
> Key: HADOOP-13168
> URL: https://issues.apache.org/jira/browse/HADOOP-13168
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13168_20160517.patch
>
>
> Currently, the Future returned by an IPC async call only supports Future.get() 
> but not Future.get(timeout, unit). We should support the latter as well.






[jira] [Created] (HADOOP-13168) Support Future.get with timeout in ipc async calls

2016-05-17 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13168:


 Summary: Support Future.get with timeout in ipc async calls
 Key: HADOOP-13168
 URL: https://issues.apache.org/jira/browse/HADOOP-13168
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently, the Future returned by an IPC async call only supports Future.get() but 
not Future.get(timeout, unit). We should support the latter as well.






[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288027#comment-15288027
 ] 

Hudson commented on HADOOP-13157:
-

ABORTED: Integrated in Hadoop-trunk-Commit #9811 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9811/])
HADOOP-13157. Follow-on improvements to hadoop credential commands. (wang: rev 
7154ace71212e9fb9dd6209a92165fb075df7806)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ProviderUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java


> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287981#comment-15287981
 ] 

Andrew Wang commented on HADOOP-13157:
--

I've committed this to trunk and branch-2; branch-2.8 wasn't clean and failed 
with a compile error in Shell.java.

Mike, do you mind preparing a branch-2.8 patch as well? Thanks.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13157:
-
Fix Version/s: 2.9.0

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Fix For: 2.9.0
>
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287973#comment-15287973
 ] 

Andrew Wang commented on HADOOP-13157:
--

+1 let's get it in, thanks for working on this [~yoderme]!

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}






[jira] [Updated] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13132:
-
Attachment: HADOOP-13132.002.patch

Thanks [~xiaochen] for the suggestion. I attached a new patch which incorporates 
it. In addition, this patch adds a test case.

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch
>
>
> An Oozie job with a single shell action fails (this may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
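
For illustration, a type-safe way to rethrow that avoids the unchecked cast (an 
assumed approach, not necessarily what the attached patches do):

{code:title=Sketch (assumed approach)|borderStyle=solid}
import java.io.IOException;
import java.security.GeneralSecurityException;

// Hedged sketch: only cast when the wrapped exception really is a
// GeneralSecurityException; anything else (e.g. AuthenticationException)
// is wrapped in an IOException so the real cause is preserved instead of
// being lost in a ClassCastException.
static void rethrow(Exception cause) throws IOException, GeneralSecurityException {
  if (cause instanceof GeneralSecurityException) {
    throw (GeneralSecurityException) cause;
  }
  throw new IOException(cause);
}
{code}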






[jira] [Commented] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-17 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287946#comment-15287946
 ] 

Chris Nauroth commented on HADOOP-13130:


[~ste...@apache.org], thank you for the patch.  Here are a few comments.

{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Completing multi-part upload for key '{}', id '{}'", key,
  uploadId);
}
{code}

The log level guard is unnecessary.
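
With SLF4J's parameterized logging the message is only formatted when debug is 
enabled, so the call can stand on its own:

{code}
// No isDebugEnabled() guard needed; the arguments are cheap and the
// string is only built if debug logging is on.
LOG.debug("Completing multi-part upload for key '{}', id '{}'", key, uploadId);
{code}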

{code}
   public FSDataOutputStream append(Path f, int bufferSize,
   Progressable progress) throws IOException {
-throw new IOException("Not supported");
+throw new UnsupportedOperationException("Not supported");
   }
{code}

Possibly backwards-incompatible?  Maybe someone coded error handling that 
catches {{IOException}} and falls back to a non-append strategy for non-HDFS?

{code}
String header = operation
+ (path != null ? ("on " + path) : "")
+ ": ";
String message = header + exception;
{code}

The message will have no space between {{operation}} and {{"on "}}.
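
For example, adding the space inside the conditional keeps the rest unchanged:

{code}
String header = operation
    + (path != null ? (" on " + path) : "")
    + ": ";
String message = header + exception;
{code}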

{code}
  // this exception is non-null if the service exception is an s3 on
{code}

Typo at end of comment?

{code}
  public static void eventually(int timeout, Callable callback)
  throws Exception {
Exception lastException;
long endtime = System.currentTimeMillis() + timeout;
do {
  try {
callback.call();
return;
  } catch (FailFastException e) {
throw e;
  } catch (Exception e) {
lastException = e;
  }
  Thread.sleep(500);
} while (endtime > System.currentTimeMillis());
throw lastException;
  }
{code}

{{eventually}} doesn't appear to be interested in the results returned from the 
callback, so would {{Runnable}} be a better fit than {{Callable}}?

{code}
  assertEquals("Expected EOF got char " + (char) c, -1, c);

  byte[] buf = new byte[256];

  assertEquals(-1, instream.read(buf));
  assertEquals(-1, instream.read(instream.getPos(), buf, 0, buf.length));

  // now do a block read fully, again, backwards from the current pos
  try {
instream.readFully(shortLen + 512, buf);
fail("Expected readFully to fail");
  } catch (EOFException expected) {
LOG.debug("Expected: ", expected);
  }

  assertEquals(-1, instream.read(shortLen + 510, buf, 0, buf.length));
{code}

Do you want to use the descriptive "Expected EOF" message on all of these EOF 
assertions?

See below for several test failures I got after applying the patch to branch-2. 
 I see these failures consistently, running both with and without the 
parallel-tests profile.  If these failures don't repro for you, let me know, 
and I'll dig deeper on my side.

{code}
testProxyConnection(org.apache.hadoop.fs.s3a.TestS3AConfiguration)  Time 
elapsed: 1.635 sec  <<< ERROR!
java.io.IOException: doesBucketExiston cnauroth-test-aws-s3a: 
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connection 
to http://127.0.0.1:1 refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107)
at 
com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:289)
at 

[jira] [Commented] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287939#comment-15287939
 ] 

Hadoop QA commented on HADOOP-12723:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804541/HADOOP-12723.4.patch |
| JIRA Issue | HADOOP-12723 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 9b98ece2b3e3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd99f5f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9475/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9475/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A: Add ability to plug in any 

[jira] [Commented] (HADOOP-13140) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287897#comment-15287897
 ] 

Hadoop QA commented on HADOOP-13140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 42s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804527/HADOOP-13140.002.patch
 |
| JIRA Issue | HADOOP-13140 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bc757bc6d07 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0c6726e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9474/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9474/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> GlobalStorageStatistics should check null FileSystem scheme to avoid NPE
> 
>
> Key: HADOOP-13140
> URL: https://issues.apache.org/jira/browse/HADOOP-13140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Brahma Reddy Battula
>Assignee: Mingliang Liu
> Attachments: 

[jira] [Updated] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-05-17 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-12723:
-
Attachment: HADOOP-12723.4.patch

Rebasing the patch. Could someone review it again?

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch, HADOOP-12723.1.patch, 
> HADOOP-12723.2.patch, HADOOP-12723.3.patch, HADOOP-12723.4.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by 
> {{S3AFileSystem.initialize(URI, Configuration)}}.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.
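
For illustration, a minimal provider matching the constructor contract described 
above (the class name and configuration keys are made up for the example):

{code:title=Sketch of a pluggable provider (illustrative)|borderStyle=solid}
import java.net.URI;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: a provider with the (URI, Configuration) constructor
// described above; where the keys come from is up to the implementation.
public class MyCredentialsProvider implements AWSCredentialsProvider {
  private final AWSCredentials credentials;

  public MyCredentialsProvider(URI uri, Configuration conf) {
    // "my.access.key" / "my.secret.key" are hypothetical example properties.
    this.credentials = new BasicAWSCredentials(
        conf.get("my.access.key"), conf.get("my.secret.key"));
  }

  @Override
  public AWSCredentials getCredentials() {
    return credentials;
  }

  @Override
  public void refresh() {
    // nothing to refresh for static credentials
  }
}
{code}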






[jira] [Updated] (HADOOP-13145) In DistCp, prevent unnecessary getFileStatus call when not preserving metadata.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13145:
---
Attachment: HADOOP-13145.003.patch

Patch v003 adds a new abstract contract test suite for DistCp coverage and 
concrete test suite subclasses for S3A and WASB.  I verified the tests are 
passing for both hadoop-aws (including running in parallel mode) and 
hadoop-azure.

I'm going to leave the JIRA issue in Open status instead of Patch Available for 
now.  The v003 patch will potentially hit Jenkins a little hard because of 
touching multiple modules, so I'd like to get another round of code review 
feedback first.


> In DistCp, prevent unnecessary getFileStatus call when not preserving 
> metadata.
> ---
>
> Key: HADOOP-13145
> URL: https://issues.apache.org/jira/browse/HADOOP-13145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13145.001.patch, HADOOP-13145.003.patch
>
>
> After DistCp copies a file, it calls {{getFileStatus}} to get the 
> {{FileStatus}} from the destination so that it can compare to the source and 
> update metadata if necessary.  If the DistCp command was run without the 
> option to preserve metadata attributes, then this additional 
> {{getFileStatus}} call is wasteful.
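
For illustration, the shape of the optimization (method and parameter names are 
hypothetical, not the actual patch):

{code:title=Sketch (assumed approach, hypothetical names)|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: skip the extra getFileStatus round trip when nothing is
// being preserved; there is nothing to compare or update in that case.
static void maybeUpdateAttributes(FileSystem targetFS, Path targetPath,
    boolean preservingAnyAttribute) throws IOException {
  if (!preservingAnyAttribute) {
    return;  // no attributes to preserve, so skip the RPC entirely
  }
  FileStatus targetStatus = targetFS.getFileStatus(targetPath);
  // ...compare targetStatus against the source and update attributes here
}
{code}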






[jira] [Commented] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287830#comment-15287830
 ] 

Hudson commented on HADOOP-13159:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9809 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9809/])
HADOOP-13159. Fix potential NPE in Metrics2 source for (xyao: rev 
94784848456a92a6502f3a3c0074e44fba4b19c9)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java


> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-13159.00.patch, HADOOP-13159.01.patch, 
> HADOOP-13159.02.patch
>
>
> HADOOP-12985 introduced a few metrics2 counters for DecayRpcScheduler. The 
> counters should be initialized before the updater thread is created/executed. 
> Otherwise, there is a chance of an NPE if the updater thread gets executed 
> before the counters are fully initialized. I will post a patch for it shortly. 
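
For illustration, the ordering idea in isolation (class and field names are 
hypothetical, not the actual DecayRpcScheduler code):

{code:title=Ordering sketch (hypothetical names)|borderStyle=solid}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch: construct every counter the updater reads first, and only
// then schedule the periodic task, so it can never observe a null field.
class MetricsInitOrder {
  private final AtomicLong totalCalls = new AtomicLong();  // initialized eagerly
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

  MetricsInitOrder(long periodMillis) {
    // Scheduling happens last, after all counters above are ready.
    executor.scheduleAtFixedRate(this::update, periodMillis, periodMillis,
        TimeUnit.MILLISECONDS);
  }

  private void update() {
    totalCalls.get();  // counters are guaranteed to be non-null here
  }
}
{code}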






[jira] [Updated] (HADOOP-13145) In DistCp, prevent unnecessary getFileStatus call when not preserving metadata.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13145:
---
Status: Open  (was: Patch Available)

> In DistCp, prevent unnecessary getFileStatus call when not preserving 
> metadata.
> ---
>
> Key: HADOOP-13145
> URL: https://issues.apache.org/jira/browse/HADOOP-13145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13145.001.patch
>
>
> After DistCp copies a file, it calls {{getFileStatus}} to get the 
> {{FileStatus}} from the destination so that it can compare to the source and 
> update metadata if necessary.  If the DistCp command was run without the 
> option to preserve metadata attributes, then this additional 
> {{getFileStatus}} call is wasteful.






[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287802#comment-15287802
 ] 

Hadoop QA commented on HADOOP-13157:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 59 unchanged - 2 fixed = 59 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 53s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804524/HADOOP-13157.002.patch
 |
| JIRA Issue | HADOOP-13157 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9004c95f2598 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0c6726e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9473/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9473/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code 

[jira] [Updated] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13159:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~arpitagarwal] and [~templedf] for the review. I committed the patch v02 
to trunk, branch-2, and branch-2.8 based on [~arpitagarwal]'s +1. 

No unit test is added for this change because it is difficult to test the race 
between the main thread and the updater thread during initialization. I manually 
validated the fix with multiple NameNode restarts and checked the NameNode log, 
JMX, and jstack to ensure there is no NPE and that the JMX output is as expected. 

> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-13159.00.patch, HADOOP-13159.01.patch, 
> HADOOP-13159.02.patch
>
>
> HADOOP-12985 introduced a few metrics2 counters for DecayRpcScheduler. The 
> counters should be initialized before the updater thread is created/executed. 
> Otherwise, there is a chance of an NPE if the updater thread gets executed 
> before the counters are fully initialized. I will post a patch for it shortly. 
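
A minimal sketch of the ordering fix described above; the class and field names are illustrative, not the real DecayRpcScheduler code:

{code:title=Sketch: initialize counters before the updater thread is scheduled (illustrative)}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLongArray;

/** Hypothetical, trimmed-down scheduler showing only the initialization order. */
class DecaySchedulerSketch {
  private final AtomicLongArray callCounts;
  private final ScheduledExecutorService updater;

  DecaySchedulerSketch(int numLevels, long decayPeriodMillis) {
    // 1. Fully initialize every counter the updater will read ...
    callCounts = new AtomicLongArray(numLevels);

    // 2. ... and only then create and schedule the background updater, so it
    //    can never observe a null or half-built metrics source.
    updater = Executors.newSingleThreadScheduledExecutor();
    updater.scheduleAtFixedRate(this::decayCounts,
        decayPeriodMillis, decayPeriodMillis, TimeUnit.MILLISECONDS);
  }

  private void decayCounts() {
    for (int i = 0; i < callCounts.length(); i++) {
      callCounts.set(i, callCounts.get(i) / 2); // safe: counters already exist
    }
  }
}
{code}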



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287788#comment-15287788
 ] 

Hadoop QA commented on HADOOP-13160:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 106m 30s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 164m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.TestFileSystemApplicationHistoryStore
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804502/HADOOP-13160.003.patch
 |
| JIRA Issue | HADOOP-13160 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  |
| uname | Linux 16372e5b5784 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fa3bc34 |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9470/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9470/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9470/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9470/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> 

[jira] [Commented] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287772#comment-15287772
 ] 

Hadoop QA commented on HADOOP-13159:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 19s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804518/HADOOP-13159.02.patch 
|
| JIRA Issue | HADOOP-13159 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ee9221b911e0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1356cbe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9472/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9472/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13159.00.patch, 

[jira] [Updated] (HADOOP-13140) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13140:
---
Attachment: HADOOP-13140.002.patch

> GlobalStorageStatistics should check null FileSystem scheme to avoid NPE
> 
>
> Key: HADOOP-13140
> URL: https://issues.apache.org/jira/browse/HADOOP-13140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Brahma Reddy Battula
>Assignee: Mingliang Liu
> Attachments: HADOOP-13140.000.patch, HADOOP-13140.001.patch, 
> HADOOP-13140.002.patch
>
>
> {{org.apache.hadoop.fs.GlobalStorageStatistics#put}} does not check for a null 
> scheme, so the internal map will throw an NPE. This was reported by a flaky 
> test, {{TestFileSystemApplicationHistoryStore}}. Thanks [~brahmareddy] for 
> reporting.
> To address this,
> # Fix the test by providing a valid URI, e.g. {{file:///}}
> # Guard the null scheme in {{GlobalStorageStatistics#put}}
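
A minimal sketch of the second point, assuming a map-backed registry like the one described; only the guard itself is the point, the rest is simplified:

{code:title=Sketch: guard a null scheme before touching the internal map (simplified)}
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical, simplified statistics registry; not the real class body. */
class StorageStatisticsSketch {
  private final Map<String, Long> map = new TreeMap<>(); // TreeMap rejects null keys

  synchronized Long put(String scheme, Long value) {
    if (scheme == null || scheme.isEmpty()) {
      // Fail fast with a clear message instead of an NPE out of the TreeMap.
      throw new IllegalArgumentException("FileSystem scheme must not be null or empty");
    }
    return map.put(scheme, value);
  }
}
{code}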



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Commented] (HADOOP-13140) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287659#comment-15287659
 ] 

Hadoop QA commented on HADOOP-13140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
2 new + 126 unchanged - 1 fixed = 128 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 33s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804513/HADOOP-13140.001.patch
 |
| JIRA Issue | HADOOP-13140 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 691e94be5406 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1356cbe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9471/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9471/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9471/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9471/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287645#comment-15287645
 ] 

Mike Yoder commented on HADOOP-13157:
-

Added this in patch 2.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}
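
On the code-reuse nit about {{locatePassword()}} quoted above, a minimal sketch of what a shared static helper could look like, assuming the only real difference between the two providers is the environment variable and the password-file property; the class name and signature are hypothetical:

{code:title=Sketch: one password-locating helper shared by both keystore providers (hypothetical)}
import java.io.IOException;
import java.net.URL;
import java.util.Scanner;

import org.apache.hadoop.conf.Configuration;

final class PasswordLocator {
  private PasswordLocator() {}

  /**
   * Look for a keystore password first in the named environment variable,
   * then in a password file named by the given configuration property.
   * Returns null when neither is set, so callers can fall back to a default.
   */
  static char[] locatePassword(String envVarName, Configuration conf,
      String pwFileProperty) throws IOException {
    String env = System.getenv(envVarName);
    if (env != null) {
      return env.toCharArray();
    }
    String pwFileName = conf.get(pwFileProperty);
    if (pwFileName == null) {
      return null;
    }
    URL pwFile = Thread.currentThread().getContextClassLoader()
        .getResource(pwFileName);
    if (pwFile == null) {
      throw new IOException("Password file " + pwFileName + " not found");
    }
    try (Scanner scanner = new Scanner(pwFile.openStream(), "UTF-8")) {
      scanner.useDelimiter("\\A");
      return scanner.hasNext() ? scanner.next().trim().toCharArray() : null;
    }
  }
}
{code}

Each provider would then call the helper with its own environment variable name and property key, keeping the lookup order in one place.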



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Attachment: HADOOP-13157.002.patch

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Status: Patch Available  (was: Open)

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch, HADOOP-13157.002.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-13157:

Status: Open  (was: Patch Available)

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287589#comment-15287589
 ] 

Arpit Agarwal commented on HADOOP-13159:


+1 for the v02 patch. Actually you are avoiding 19 invocations of 
topNCallers.size() with the update.
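
The kind of change being discussed, as an illustrative fragment only (the real DecayRpcScheduler method differs): caching the size in a local variable means it is computed once instead of once per emitted record.

{code:title=Illustrative only: compute the collection size once and reuse it}
import java.util.Map;

final class SizeHoistExample {
  private SizeHoistExample() {}

  /** Builds one line per caller; size() is read a single time up front. */
  static String summarize(Map<String, Long> topNCallers) {
    final int total = topNCallers.size();             // evaluated exactly once
    StringBuilder sb = new StringBuilder("total=" + total + "\n");
    for (Map.Entry<String, Long> e : topNCallers.entrySet()) {
      sb.append(e.getKey()).append('=').append(e.getValue())
          .append(" of ").append(total).append('\n'); // reuse the cached value
    }
    return sb.toString();
  }
}
{code}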

> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13159.00.patch, HADOOP-13159.01.patch, 
> HADOOP-13159.02.patch
>
>
> HADOOP-12985 introduced a few metrics2 counters for DecayRpcScheduler. The 
> counters should be initialized before the updater thread is created/executed. 
> Otherwise, there is a chance of an NPE if the updater thread gets executed 
> before the counters are fully initialized. I will post a patch for it shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13159:

Attachment: HADOOP-13159.02.patch

Minor change to avoid calling topNCallers.size() twice.

> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13159.00.patch, HADOOP-13159.01.patch, 
> HADOOP-13159.02.patch
>
>
> HADOOP-12985 introduced a few metrics2 counters for DecayRpcScheduler. The 
> counters should be initialized before the updater thread is created/executed. 
> Otherwise, there is a chance of an NPE if the updater thread gets executed 
> before the counters are fully initialized. I will post a patch for it shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13140) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13140:
---
Attachment: HADOOP-13140.001.patch

Thanks [~cmccabe] for the suggestion. I like the idea.

Attaching the v1 patch.

> GlobalStorageStatistics should check null FileSystem scheme to avoid NPE
> 
>
> Key: HADOOP-13140
> URL: https://issues.apache.org/jira/browse/HADOOP-13140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Brahma Reddy Battula
>Assignee: Mingliang Liu
> Attachments: HADOOP-13140.000.patch, HADOOP-13140.001.patch
>
>
> {{org.apache.hadoop.fs.GlobalStorageStatistics#put}} does not check for a null 
> scheme, so the internal map will throw an NPE. This was reported by a flaky 
> test, {{TestFileSystemApplicationHistoryStore}}. Thanks [~brahmareddy] for 
> reporting.
> To address this,
> # Fix the test by providing a valid URI, e.g. {{file:///}}
> # Guard the null scheme in {{GlobalStorageStatistics#put}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-17 Thread Martin W. Kirst (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287510#comment-15287510
 ] 

Martin W. Kirst commented on HADOOP-13126:
--

[~rdblue] Great that you are taking care of adopting Brotli into Hadoop.

A small side note:
I filed one minor issue for Brotli itself, which you may want to consider when 
thinking about backing a release.
See https://github.com/google/brotli/issues/346
The good news is that the comments imply a fix will be available soon.
I will keep an eye on that and adopt it ASAP.

Let's rock this :-)

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch, HADOOP-13126.2.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].
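
For readers wondering what "getting this codec into Hadoop" looks like from the user side, a hedged sketch of the usual wiring; the Brotli codec class is deliberately left as a commented-out placeholder because it does not exist yet:

{code:title=Sketch: how a codec is typically wired in via io.compression.codecs}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecWiringSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A Brotli codec class would be appended to this list once it is on the
    // classpath, e.g. ",org.example.BrotliCodec" (placeholder name only).
    conf.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.DefaultCodec,"
        + "org.apache.hadoop.io.compress.GzipCodec");

    CompressionCodecFactory factory = new CompressionCodecFactory(conf);
    CompressionCodec codec = factory.getCodecByName("GzipCodec");
    System.out.println("Resolved codec: " + codec.getClass().getName());
  }
}
{code}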



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-17 Thread Martin W. Kirst (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287490#comment-15287490
 ] 

Martin W. Kirst commented on HADOOP-13126:
--

Sure, I will.
I plan to do this by the end of this week.

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch, HADOOP-13126.2.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287475#comment-15287475
 ] 

Andrew Wang commented on HADOOP-13157:
--

LGTM. The only small nit: now that we've settled on the exception vs. null 
behavior for the pwfile, shall we update the relevant parent class Javadocs to 
explain the behavior?

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287470#comment-15287470
 ] 

Mike Yoder commented on HADOOP-13157:
-

The failure was:
{noformat}
TestIPC.testConnectionIdleTimeouts:941 expected:<7> but was:<4>
{noformat}
It is hard to see how this code has anything to do with that...

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> this method looks very similar to the one in JavaKeyStoreProvider, except the 
> env var it looks for is different, is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13160:

Attachment: HADOOP-13160.003.patch

Patch 003:
* Fix asflicense error

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287342#comment-15287342
 ] 

Hadoop QA commented on HADOOP-13160:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 29s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804457/HADOOP-13160.002.patch
 |
| JIRA Issue | HADOOP-13160 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  |
| uname | Linux e33fa7e2d248 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34fddd1 |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9468/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9468/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9468/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9468/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9468/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch
>
>
> Suppress "Missing package-info.java" 

[jira] [Commented] (HADOOP-13140) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287297#comment-15287297
 ] 

Colin Patrick McCabe commented on HADOOP-13140:
---

{code}
  /** Called after a new FileSystem instance is constructed.
   * @param name a uri whose authority section names the host, port, etc.
   *   for this FileSystem
   * @param conf the configuration
   */
  public void initialize(URI name, Configuration conf) throws IOException {
statistics = getStatistics(name.getScheme(), getClass());
resolveSymlinks = conf.getBoolean(
CommonConfigurationKeys.FS_CLIENT_RESOLVE_REMOTE_SYMLINKS_KEY,
CommonConfigurationKeys.FS_CLIENT_RESOLVE_REMOTE_SYMLINKS_DEFAULT);
  }
{code}

If {{name#getScheme()}} is empty or null here, we can use 
{{FileSystem#getDefaultUri#getScheme}} to pass a non-null scheme.  That should 
cover almost all the cases where a null scheme would be passed.

If the user intentionally passes a null or empty scheme directly to 
{{FileSystem#getStatistics}}, we should throw an exception.
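
A minimal sketch of that suggestion; treat it as one possible shape of the fix rather than the committed patch, and note that the helper class below is made up:

{code:title=Sketch: resolve a usable scheme before registering statistics (one possible shape)}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

final class SchemeFallback {
  private SchemeFallback() {}

  /** Prefer the URI's scheme, fall back to fs.defaultFS, never return null. */
  static String resolveScheme(URI name, Configuration conf) {
    String scheme = name.getScheme();
    if (scheme == null || scheme.isEmpty()) {
      scheme = FileSystem.getDefaultUri(conf).getScheme();
    }
    if (scheme == null || scheme.isEmpty()) {
      // Callers that still end up with no scheme get a clear error instead of
      // an NPE out of the statistics map.
      throw new IllegalArgumentException("No FileSystem scheme for " + name);
    }
    return scheme;
  }
}
{code}

{{FileSystem#initialize}} would then call {{getStatistics(resolveScheme(name, conf), getClass())}} instead of passing the raw scheme through.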

> GlobalStorageStatistics should check null FileSystem scheme to avoid NPE
> 
>
> Key: HADOOP-13140
> URL: https://issues.apache.org/jira/browse/HADOOP-13140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Brahma Reddy Battula
>Assignee: Mingliang Liu
> Attachments: HADOOP-13140.000.patch
>
>
> {{org.apache.hadoop.fs.GlobalStorageStatistics#put}} does not check for a null 
> scheme, so the internal map will throw an NPE. This was reported by a flaky 
> test, {{TestFileSystemApplicationHistoryStore}}. Thanks [~brahmareddy] for 
> reporting.
> To address this,
> # Fix the test by providing a valid URI, e.g. {{file:///}}
> # Guard the null scheme in {{GlobalStorageStatistics#put}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287291#comment-15287291
 ] 

Hadoop QA commented on HADOOP-13157:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 58 unchanged - 2 fixed = 58 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 30s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804308/HADOOP-13157.001.patch
 |
| JIRA Issue | HADOOP-13157 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77f50f2bb295 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34fddd1 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9469/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9469/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9469/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9469/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> 

[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287174#comment-15287174
 ] 

Andrew Wang commented on HADOOP-13157:
--

Yea, probably related to the JDK8 bump we just did. I think Allen just fixed 
it, so I retriggered the build.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> This method looks very similar to the one in JavaKeyStoreProvider, except that 
> the env var it looks for is different; is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}
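
A minimal sketch of the kind of shared helper that the code-reuse question above hints at; the class and method names are hypothetical and the lookup rules are simplified relative to the real providers:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical shared helper; not part of the actual Hadoop source.
final class KeystorePasswordLocator {
  private KeystorePasswordLocator() {}

  /**
   * Locate a keystore password: first in the named environment variable,
   * then in an optional password file. Returns null when neither is set,
   * leaving the caller to fall back to its default password.
   */
  static char[] locate(String envVar, String passwordFile) throws IOException {
    String fromEnv = System.getenv(envVar);
    if (fromEnv != null && !fromEnv.isEmpty()) {
      return fromEnv.toCharArray();
    }
    if (passwordFile != null) {
      Path p = Paths.get(passwordFile);
      if (Files.exists(p)) {
        return new String(Files.readAllBytes(p), StandardCharsets.UTF_8)
            .trim().toCharArray();
      }
    }
    return null;
  }
}
{code}

Each provider would then call the helper with its own environment variable name, which is the reuse the comment asks about.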



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13167) "javac: invalid target release: 1.8" failures happen on YARN precommit jobs

2016-05-17 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13167.
---
Resolution: Duplicate

YARN-5100 ran without incident. Closing as a dupe.

> "javac: invalid target release: 1.8" failures happen on YARN precommit jobs
> ---
>
> Key: HADOOP-13167
> URL: https://issues.apache.org/jira/browse/HADOOP-13167
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> Tons of failures happen on YARN precommit runs, for example:
> https://issues.apache.org/jira/browse/YARN-4957?focusedCommentId=15285836=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285836.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287124#comment-15287124
 ] 

Xiao Chen commented on HADOOP-13132:


Thanks for the patch [~jojochuang], LGTM.
One minor ask: maybe we can add a log when the wrapped exception is not a GSE, 
since that's the assumption of the original code. 
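
A rough sketch of that guarded rethrow-with-logging (illustrative names only, not the actual LoadBalancingKMSClientProvider code or the patch):

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class WrappedCauseRethrow {
  private static final Logger LOG =
      LoggerFactory.getLogger(WrappedCauseRethrow.class);

  private WrappedCauseRethrow() {}

  /**
   * Rethrow the cause of a wrapper exception. The original assumption is that
   * the cause is always a GeneralSecurityException; when it is not (e.g. an
   * AuthenticationException), log it and surface it as an IOException instead
   * of letting a ClassCastException escape.
   */
  static void rethrowCause(Exception wrapper)
      throws IOException, GeneralSecurityException {
    Throwable cause = wrapper.getCause();
    if (cause instanceof GeneralSecurityException) {
      throw (GeneralSecurityException) cause;
    }
    LOG.warn("Unexpected wrapped exception type {}",
        cause == null ? "null" : cause.getClass().getName(), wrapper);
    throw new IOException(wrapper);
  }
}
{code}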

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13132.001.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287122#comment-15287122
 ] 

Allen Wittenauer commented on HADOOP-12666:
---

NP. I wish there was a way to avoid per-pom settings, but alas :(

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, HIVE, HBase, etc., to use ADL store as 
> input or output.
>  
> ADL is ultra-high capacity, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13167) "javac: invalid target release: 1.8" failures happen on YARN precommit jobs

2016-05-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287118#comment-15287118
 ] 

Arun Suresh commented on HADOOP-13167:
--

This should be fixed by HADOOP-13161
Kicking Jenkins to re-run the patch now should be fine..

> "javac: invalid target release: 1.8" failures happen on YARN precommit jobs
> ---
>
> Key: HADOOP-13167
> URL: https://issues.apache.org/jira/browse/HADOOP-13167
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Blocker
>
> Tons of failures happen on YARN precommit runs, for example:
> https://issues.apache.org/jira/browse/YARN-4957?focusedCommentId=15285836=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285836.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13159) Fix potential NPE in Metrics2 source for DecayRpcScheduler

2016-05-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287115#comment-15287115
 ] 

Arpit Agarwal commented on HADOOP-13159:


+1 thanks for fixing this [~xyao].

> Fix potential NPE in Metrics2 source for DecayRpcScheduler
> --
>
> Key: HADOOP-13159
> URL: https://issues.apache.org/jira/browse/HADOOP-13159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13159.00.patch, HADOOP-13159.01.patch
>
>
> HADOOP-12985 introduced a few metrics2 counters for DecayRpcScheduler. The 
> counters should be initialized before the updater thread is created/executed. 
> Otherwise, there is a chance of an NPE if the updater thread gets executed 
> before the counters are fully initialized. I will post a patch for it shortly. 
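
As a minimal illustration of the initialize-before-schedule ordering described above (generic code, not the DecayRpcScheduler patch):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLongArray;

final class CounterInitOrdering {
  private final AtomicLongArray counters;
  private final ScheduledExecutorService updater =
      Executors.newSingleThreadScheduledExecutor();

  CounterInitOrdering(int numCounters, long periodMillis) {
    // Initialize the counters first...
    counters = new AtomicLongArray(numCounters);
    // ...and only then start the periodic updater, so the background task can
    // never observe an uninitialized counters field.
    updater.scheduleAtFixedRate(this::decay, periodMillis, periodMillis,
        TimeUnit.MILLISECONDS);
  }

  private void decay() {
    for (int i = 0; i < counters.length(); i++) {
      counters.set(i, counters.get(i) / 2);
    }
  }
}
{code}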



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287114#comment-15287114
 ] 

Chris Nauroth commented on HADOOP-12666:


[~aw], awesome!  Thanks for the tip.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, HIVE, HBase, etc., to use ADL store as 
> input or output.
>  
> ADL is ultra-high capacity, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12857) Rework hadoop-tools

2016-05-17 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Release Note: 

* Turning on optional things from the tools directory such as S3 support can 
now be done in hadoop-env.sh with the HADOOP\_OPTIONAL\_TOOLS environment 
variable without impacting the various user-facing CLASSPATH variables.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  
* TOOL\_PATH / HADOOP\_TOOLS\_PATH has been broken apart and replaced with 
HADOOP\_TOOLS\_HOME, HADOOP\_TOOLS\_DIR and HADOOP\_TOOLS\_LIB\_JARS\_DIR to be 
consistent with the rest of Hadoop.

  was:

* Turning on optional things from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH.
* The tools directory is no longer pulled in blindly for any utilities that 
pull it in.  
* TOOL\_PATH / HADOOP\_TOOLS\_PATH has been broken apart and replaced with 
HADOOP\_TOOLS\_HOME, HADOOP\_TOOLS\_DIR and HADOOP\_TOOLS\_LIB\_JARS\_DIR to be 
consistent with the rest of Hadoop.


> Rework hadoop-tools
> ---
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12857.00.patch, HADOOP-12857.01.patch, 
> HADOOP-12857.02.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287108#comment-15287108
 ] 

Allen Wittenauer commented on HADOOP-12666:
---

Just a heads up that if you want folks to be able to turn this on with 
HADOOP_OPTIONAL_TOOLS in hadoop-env.sh (currently a trunk-only feature), you'll 
want the following in your pom.xml:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>deplist</id>
      <phase>compile</phase>
      <goals>
        <goal>list</goal>
      </goals>
      <configuration>
        <outputFile>${project.basedir}/target/hadoop-tools-deps/${project.artifactId}.tools-optional.txt</outputFile>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}
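
As a usage sketch (module name illustrative), a deployment would then opt in with something like {{export HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake"}} in hadoop-env.sh.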

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, HIVE, HBase, etc., to use ADL store as 
> input or output.
>  
> ADL is ultra-high capacity, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13167) "javac: invalid target release: 1.8" failures happen on YARN precommit jobs

2016-05-17 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13167:
---

 Summary: "javac: invalid target release: 1.8" failures happen on 
YARN precommit jobs
 Key: HADOOP-13167
 URL: https://issues.apache.org/jira/browse/HADOOP-13167
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Priority: Blocker


Tons of failures happen on YARN precommit runs, for example:
https://issues.apache.org/jira/browse/YARN-4957?focusedCommentId=15285836=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285836.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286993#comment-15286993
 ] 

Hadoop QA commented on HADOOP-13160:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 2m 51s 
{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804457/HADOOP-13160.002.patch
 |
| JIRA Issue | HADOOP-13160 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9467/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13160:

Status: Patch Available  (was: In Progress)

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13160:

Attachment: HADOOP-13160.002.patch

Patch 002:
* Move checkstyle suppressions xml to {{dev-support/checkstyle}} directory
* Bump checkstyle suppressions xml to DTD 1.1
* Fix whitespace issues

Tested it on Mac and Ubuntu.

Thanks [~ste...@apache.org] and [~boky01] for comments.

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286952#comment-15286952
 ] 

Hadoop QA commented on HADOOP-12666:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 8m 2s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 41s 
{color} | {color:red} root: The patch generated 1 new + 5 unchanged - 0 fixed = 
6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 11s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 27s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 13s 
{color} | {color:green} hadoop-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-tools-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286902#comment-15286902
 ] 

Hadoop QA commented on HADOOP-13162:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 34s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 49s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
12s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 14s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
47s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} root: The patch generated 0 new + 23 unchanged - 2 
fixed = 23 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 42s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 0s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286884#comment-15286884
 ] 

Mike Yoder commented on HADOOP-13157:
-

[~andrew.wang] - I take it that there's something amiss with the build/test 
infrastructure? My failures are all:
{noformat}
Detected JDK Version: 1.7.0-95 is not in the allowed range [1.8,).
{noformat}


> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> This method looks very similar to the one in JavaKeyStoreProvider, except that 
> the env var it looks for is different; is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13157) Follow-on improvements to hadoop credential commands

2016-05-17 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286877#comment-15286877
 ] 

Mike Yoder commented on HADOOP-13157:
-

OK, I agree your title is much better.

> Follow-on improvements to hadoop credential commands
> 
>
> Key: HADOOP-13157
> URL: https://issues.apache.org/jira/browse/HADOOP-13157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-13157.001.patch
>
>
> [~andrew.wang] had some follow-up code review comments from HADOOP-12942. 
> Hence this issue.
> Ping [~lmccay] as well.  
> The comments:
> {quote}
> Overall this looks okay, the only correctness question I have is about the 
> difference in behavior when the pwfile doesn't exist.
> The rest are all nits, would be nice to do these cleanups though.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:
> Line 147:
> Could this be a static helper?
> Line 161: new
> The javadoc says it returns null in this situation. This is also a difference 
> from the implementation in the AbstractJKSP. Intentional?
> Line 175:   private void locateKeystore() throws IOException {
> static helper? for the construct*Path methods too?
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:
> Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
> FYI for the future, our coding style is to put annotations on their own 
> separate line.
> File 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:
> Line 326:   private char[] locatePassword() throws IOException {
> This method looks very similar to the one in JavaKeyStoreProvider, except that 
> the env var it looks for is different; is there potential for code reuse?
> Line 394:   "o In the environment variable " +
> Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
> syntax.
> Line 399:   
> "http://hadoop.apache.org/docs/current/hadoop-project-dist/; +
> This link is not tied to a version, so could be inaccurate.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286864#comment-15286864
 ] 

Hadoop QA commented on HADOOP-13130:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
48s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 22s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 15m 
13s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 20s 
{color} | {color:red} root: The patch generated 1 new + 26 unchanged - 6 fixed 
= 27 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286753#comment-15286753
 ] 

Hadoop QA commented on HADOOP-13166:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 5s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804426/HADOOP-13166-001.patch
 |
| JIRA Issue | HADOOP-13166 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d43906c3b64 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4a5819d |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9464/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9464/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> {{getFileStatus("/")}} working.
> While it 

[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13160:

Status: In Progress  (was: Patch Available)

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13160:

Summary: Suppress checkstyle JavadocPackage check for test source  (was: 
Suppress checkstyle JavadocPackage error for test source files)

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286715#comment-15286715
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


[~chris.douglas] - Thank you for the change set. I reviewed the change and it is 
good enough for the time being. +1

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, HIVE, HBase, etc., to use ADL store as 
> input or output.
>  
> ADL is ultra-high capacity, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13116) Jets3tNativeS3FileSystemContractTest does not run.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13116:
---
Priority: Minor  (was: Major)

> Jets3tNativeS3FileSystemContractTest does not run.
> --
>
> Key: HADOOP-13116
> URL: https://issues.apache.org/jira/browse/HADOOP-13116
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13116.001.patch
>
>
> S3N includes a test suite named {{Jets3tNativeS3FileSystemContractTest}}.  
> This test suite does not run during an {{mvn test}} run, because our Surefire 
> configuration includes only test suite classes that start with "Test" in the 
> name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13122) Customize User-Agent header sent in HTTP requests by S3A.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13122:
---
Priority: Minor  (was: Major)

> Customize User-Agent header sent in HTTP requests by S3A.
> -
>
> Key: HADOOP-13122
> URL: https://issues.apache.org/jira/browse/HADOOP-13122
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13122.001.patch
>
>
> S3A passes a User-Agent header to the S3 back-end.  Right now, it uses the 
> default value set by the AWS SDK, so Hadoop HTTP traffic doesn't appear any 
> different from general AWS SDK traffic.  If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by AWS or 
> alternative providers of S3-like services.
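
A rough sketch of the idea using the AWS SDK's {{ClientConfiguration}}; the {{fs.s3a.user.agent.prefix}} key and the prefix format here are assumptions for illustration, not necessarily what the patch uses:

{code}
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.VersionInfo;

final class S3ClientWithUserAgent {
  static AmazonS3Client create(Configuration conf) {
    ClientConfiguration awsConf = new ClientConfiguration();
    // Start from a Hadoop-identifying agent string instead of the SDK default.
    String userAgent = "Hadoop " + VersionInfo.getVersion();
    // Optionally let deployments prepend their own marker (key is assumed).
    String prefix = conf.getTrimmed("fs.s3a.user.agent.prefix", "");
    if (!prefix.isEmpty()) {
      userAgent = prefix + ", " + userAgent;
    }
    // setUserAgent is the older SDK call; newer SDKs prefer setUserAgentPrefix.
    awsConf.setUserAgent(userAgent);
    return new AmazonS3Client(awsConf);
  }
}
{code}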



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13113) Enable parallel test execution for hadoop-aws.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13113:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11694

> Enable parallel test execution for hadoop-aws.
> --
>
> Key: HADOOP-13113
> URL: https://issues.apache.org/jira/browse/HADOOP-13113
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13113.001.patch, HADOOP-13113.002.patch, 
> HADOOP-13113.003.patch, HADOOP-13113.004.patch
>
>
> The full hadoop-aws test suite takes ~30 minutes to execute.  The tests spend 
> most of their time blocked on network I/O with the S3 back-end, but they 
> don't saturate the bandwidth of the NIC.  We can improve overall execution 
> time by enabling parallel test execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13113) Enable parallel test execution for hadoop-aws.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13113:
---
Priority: Minor  (was: Major)

> Enable parallel test execution for hadoop-aws.
> --
>
> Key: HADOOP-13113
> URL: https://issues.apache.org/jira/browse/HADOOP-13113
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13113.001.patch, HADOOP-13113.002.patch, 
> HADOOP-13113.003.patch, HADOOP-13113.004.patch
>
>
> The full hadoop-aws test suite takes ~30 minutes to execute.  The tests spend 
> most of their time blocked on network I/O with the S3 back-end, but they 
> don't saturate the bandwidth of the NIC.  We can improve overall execution 
> time by enabling parallel test execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13158) S3AFileSystem#toString might throw NullPointerException due to null cannedACL.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13158:
---
Priority: Minor  (was: Major)

> S3AFileSystem#toString might throw NullPointerException due to null cannedACL.
> --
>
> Key: HADOOP-13158
> URL: https://issues.apache.org/jira/browse/HADOOP-13158
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13158.001.patch
>
>
> The {{cannedACL}} field of {{S3AFileSystem}} can be {{null}}.  The 
> {{toString}} implementation has an unguarded call to 
> {{cannedACL.toString()}}, so there is a risk of {{NullPointerException}}.
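A minimal, self-contained sketch of the kind of null guard implied here; the class and 
field layout are illustrative, and only the {{cannedACL}} null check reflects the issue.

{code:java}
// Illustrative only: guard the optional field instead of calling cannedACL.toString()
// unconditionally.
class S3AToStringSketch {
  private final String uri;
  private final Object cannedACL;   // may legitimately be null

  S3AToStringSketch(String uri, Object cannedACL) {
    this.uri = uri;
    this.cannedACL = cannedACL;
  }

  @Override
  public String toString() {
    return "S3AFileSystem{uri=" + uri
        + ", cannedACL=" + (cannedACL == null ? "" : cannedACL) + "}";
  }
}
{code}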



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13158) S3AFileSystem#toString might throw NullPointerException due to null cannedACL.

2016-05-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13158:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-11694

> S3AFileSystem#toString might throw NullPointerException due to null cannedACL.
> --
>
> Key: HADOOP-13158
> URL: https://issues.apache.org/jira/browse/HADOOP-13158
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-13158.001.patch
>
>
> The {{cannedACL}} field of {{S3AFileSystem}} can be {{null}}.  The 
> {{toString}} implementation has an unguarded call to 
> {{cannedACL.toString()}}, so there is a risk of {{NullPointerException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Status: Patch Available  (was: Open)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Attachment: HADOOP-12666-015.patch

Resubmitting patch.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-17 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Status: Open  (was: Patch Available)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with rich 
> management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-17 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13162:
--
Attachment: HADOOP-13162-branch-2-002.patch

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, HADOOP-13162.001.patch
>
>
> getFileStatus is a relatively expensive call, and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.
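A minimal sketch of the general idea only, not the attached patch: probe ancestors from 
the leaf upwards and stop at the first existing directory, instead of issuing 
{{getFileStatus}} for every level unconditionally.

{code:java}
// Illustrative sketch: stop probing once an existing ancestor directory is found.
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class MkdirsSketch {
  static void checkAncestors(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        FileStatus st = fs.getFileStatus(p);
        if (!st.isDirectory()) {
          throw new IOException("Ancestor of " + dir + " is a file: " + p);
        }
        return;   // first existing ancestor found; higher levels need no probe
      } catch (FileNotFoundException e) {
        // this level does not exist yet; keep walking up
      }
    }
  }
}
{code}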



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13166:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> {{getFileStatus("/")}} working.
> While it may seem "obvious" that this will work, on object stores it's 
> actually a special case.
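An illustrative sketch of what such a contract test might look like; the class and method 
names are placeholders, and the real test added by this JIRA may differ.

{code:java}
// Illustrative sketch only; the contract-test base class normally supplies the FileSystem.
import static org.junit.Assert.assertTrue;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class RootGetFileStatusSketch {
  private FileSystem fs;   // assumed to be initialised by the test setup

  @Test
  public void testGetFileStatusOnRoot() throws Exception {
    FileStatus status = fs.getFileStatus(new Path("/"));
    // Object stores often special-case "/", so assert it still reports a directory.
    assertTrue("Root is not a directory: " + status, status.isDirectory());
  }
}
{code}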



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13166:

Attachment: HADOOP-13166-001.patch

Patch 001: adds the test

> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> {{getFileStatus("/")}} working.
> While it may seem "obvious" that this will work, on object stores it's 
> actually a special case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13130:

Status: Patch Available  (was: Open)

> s3a failures can surface as RTEs, not IOEs
> --
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13130-001.patch, HADOOP-13130-002.patch, 
> HADOOP-13130-002.patch, HADOOP-13130-003.patch, HADOOP-13130-004.patch, 
> HADOOP-13130-005.patch, HADOOP-13130-branch-2-006.patch
>
>
> S3A failures happening in the AWS library surface as 
> {{AmazonClientException}} derivatives, rather than IOEs. As the Amazon 
> exceptions are runtime exceptions, any code which catches IOEs for error 
> handling breaks.
> The fix will be to catch and wrap. The hard part will be to wrap them in 
> meaningful exceptions rather than a generic IOE. Furthermore, if anyone has 
> been catching AWS exceptions, they are going to be disappointed. That means 
> that fixing this situation could be considered "incompatible", but only for 
> code which contains assumptions about the underlying FS and the exceptions 
> it raises.
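A minimal sketch of the catch-and-wrap idea; the helper name and the mapping from AWS 
exceptions to IOEs are illustrative, not the attached patch.

{code:java}
// Illustrative sketch: convert AWS SDK runtime exceptions into IOExceptions.
import java.io.IOException;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;

final class S3AExceptionWrapSketch {
  static IOException translate(String operation, AmazonClientException e) {
    if (e instanceof AmazonServiceException) {
      AmazonServiceException ase = (AmazonServiceException) e;
      // Carry the HTTP status code so callers can tell 403s from 404s, etc.
      return new IOException(operation + " failed: status " + ase.getStatusCode(), e);
    }
    return new IOException(operation + " failed", e);
  }
}
{code}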



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11694) Über-jira: S3a phase II: robustness, scale and performance

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286656#comment-15286656
 ] 

Steve Loughran commented on HADOOP-11694:
-

Needs a test, HADOOP-13166, to stat the root path.

> Über-jira: S3a phase II: robustness, scale and performance
> --
>
> Key: HADOOP-11694
> URL: https://issues.apache.org/jira/browse/HADOOP-11694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-11571 covered the core s3a bugs surfacing in Hadoop-2.6 & other 
> enhancements to improve S3 (performance, proxy, custom endpoints)
> This JIRA covers post-2.7 issues and enhancements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13166:

Priority: Minor  (was: Major)

> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> {{getFileStatus("/")}} working.
> While it may seem "obvious" that this will work, on object stores it's 
> actually a special case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-05-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13166:
---

 Summary: add getFileStatus("/") test to 
AbstractContractGetFileStatusTest
 Key: HADOOP-13166
 URL: https://issues.apache.org/jira/browse/HADOOP-13166
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 2.7.2
Reporter: Steve Loughran


The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
{{getFileStatus("/")}} working.

While it may seem "obvious" that this will work, on object stores it's actually 
a special case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286518#comment-15286518
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 32 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
45s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 31s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 25s 
{color} | {color:red} root: The patch generated 5 new + 376 unchanged - 51 
fixed = 381 total (was 427) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 10s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 11s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 2s {color} | 
{color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 7s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 32s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| 

[jira] [Commented] (HADOOP-13158) S3AFileSystem#toString might throw NullPointerException due to null cannedACL.

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286508#comment-15286508
 ] 

Hudson commented on HADOOP-13158:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9791 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9791/])
HADOOP-13158 S3AFileSystem#toString might throw NullPointerException due 
(stevel: rev 08ea07f1b8edbc38c99015c81a62ca127a247bf7)
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> S3AFileSystem#toString might throw NullPointerException due to null cannedACL.
> --
>
> Key: HADOOP-13158
> URL: https://issues.apache.org/jira/browse/HADOOP-13158
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-13158.001.patch
>
>
> The {{cannedACL}} field of {{S3AFileSystem}} can be {{null}}.  The 
> {{toString}} implementation has an unguarded call to 
> {{cannedACL.toString()}}, so there is a risk of {{NullPointerException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-17 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13162:
--
Target Version/s: 2.8.0
Priority: Minor  (was: Major)

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162.001.patch
>
>
> getFileStatus is a relatively expensive call, and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13130:

Attachment: HADOOP-13130-branch-2-006.patch

Patch 006, in sync with branch-2, and tagged for test against that branch

> s3a failures can surface as RTEs, not IOEs
> --
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13130-001.patch, HADOOP-13130-002.patch, 
> HADOOP-13130-002.patch, HADOOP-13130-003.patch, HADOOP-13130-004.patch, 
> HADOOP-13130-005.patch, HADOOP-13130-branch-2-006.patch
>
>
> S3A failures happening in the AWS library surface as 
> {{AmazonClientException}} derivatives, rather than IOEs. As the Amazon 
> exceptions are runtime exceptions, any code which catches IOEs for error 
> handling breaks.
> The fix will be to catch and wrap. The hard part will be to wrap them in 
> meaningful exceptions rather than a generic IOE. Furthermore, if anyone has 
> been catching AWS exceptions, they are going to be disappointed. That means 
> that fixing this situation could be considered "incompatible", but only for 
> code which contains assumptions about the underlying FS and the exceptions 
> it raises.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13130:

Status: Open  (was: Patch Available)

> s3a failures can surface as RTEs, not IOEs
> --
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13130-001.patch, HADOOP-13130-002.patch, 
> HADOOP-13130-002.patch, HADOOP-13130-003.patch, HADOOP-13130-004.patch, 
> HADOOP-13130-005.patch
>
>
> S3A failures happening in the AWS library surface as 
> {{AmazonClientException}} derivatives, rather than IOEs. As the Amazon 
> exceptions are runtime exceptions, any code which catches IOEs for error 
> handling breaks.
> The fix will be to catch and wrap. The hard part will be to wrap them in 
> meaningful exceptions rather than a generic IOE. Furthermore, if anyone has 
> been catching AWS exceptions, they are going to be disappointed. That means 
> that fixing this situation could be considered "incompatible", but only for 
> code which contains assumptions about the underlying FS and the exceptions 
> it raises.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13158) S3AFileSystem#toString might throw NullPointerException due to null cannedACL.

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13158:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1. Committed this patch on its own; I'll fix up my HADOOP-13130 patch to handle 
it.

> S3AFileSystem#toString might throw NullPointerException due to null cannedACL.
> --
>
> Key: HADOOP-13158
> URL: https://issues.apache.org/jira/browse/HADOOP-13158
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-13158.001.patch
>
>
> The {{cannedACL}} field of {{S3AFileSystem}} can be {{null}}.  The 
> {{toString}} implementation has an unguarded call to 
> {{cannedACL.toString()}}, so there is a risk of {{NullPointerException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13163) Reuse pre-computed filestatus in Distcp-CopyMapper

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286489#comment-15286489
 ] 

Hudson commented on HADOOP-13163:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9790/])
HADOOP-13163 Reuse pre-computed filestatus in Distcp-CopyMapper (Rajesh 
(stevel: rev c69a649257a331da55c1a1bf61c819e289015a6b)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java


> Reuse pre-computed filestatus in Distcp-CopyMapper
> --
>
> Key: HADOOP-13163
> URL: https://issues.apache.org/jira/browse/HADOOP-13163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13163.001.patch
>
>
> https://github.com/apache/hadoop/blob/af942585a108d70e0946f6dd4c465a54d068eabf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L185
> targetStatus is already computed and can be reused in the checkUpdate() 
> function. This wouldn't be a major issue with NN/HDFS, but against S3 
> getFileStatus calls can be expensive.
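An illustrative sketch of the intent only: pass the {{FileStatus}} the mapper already 
holds into the update check rather than calling {{getFileStatus}} again. The method name 
is a placeholder, not the actual CopyMapper code.

{code:java}
// Illustrative sketch: reuse the pre-computed status instead of a second RPC/HTTP call.
import org.apache.hadoop.fs.FileStatus;

final class CopyCheckSketch {
  /** Decide whether the copy can be skipped, using the status the caller already holds. */
  static boolean canSkip(FileStatus targetStatus, long sourceLen) {
    // Cheap against the NameNode, but a second getFileStatus() is expensive against S3.
    return targetStatus != null && targetStatus.getLen() == sourceLen;
  }
}
{code}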



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13163) Reuse pre-computed filestatus in Distcp-CopyMapper

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13163:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1

patched -thanks!

> Reuse pre-computed filestatus in Distcp-CopyMapper
> --
>
> Key: HADOOP-13163
> URL: https://issues.apache.org/jira/browse/HADOOP-13163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13163.001.patch
>
>
> https://github.com/apache/hadoop/blob/af942585a108d70e0946f6dd4c465a54d068eabf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L185
> targetStatus is already computed and can be reused in the checkUpdate() 
> function. This wouldn't be a major issue with NN/HDFS, but against S3 
> getFileStatus calls can be expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13163) Reuse pre-computed filestatus in Distcp-CopyMapper

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13163:

Affects Version/s: 2.8.0
 Priority: Minor  (was: Major)
  Component/s: tools/distcp

> Reuse pre-computed filestatus in Distcp-CopyMapper
> --
>
> Key: HADOOP-13163
> URL: https://issues.apache.org/jira/browse/HADOOP-13163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13163.001.patch
>
>
> https://github.com/apache/hadoop/blob/af942585a108d70e0946f6dd4c465a54d068eabf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L185
> targetStatus is already computed and can be reused in the checkUpdate() 
> function. This wouldn't be a major issue with NN/HDFS, but against S3 
> getFileStatus calls can be expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage error for test source files

2016-05-17 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286472#comment-15286472
 ] 

Andras Bokor commented on HADOOP-13160:
---

[~jzhuge],

It does not work on Windows. Since it is a regex I suggest using the following:
{{ }}

Another thing. I would use DTD version 1.1 instead of 1.0.
It has no effect on this particular issue, but the {{id}} property can help if we 
need to write more complicated suppressions.

> Suppress checkstyle JavadocPackage error for test source files
> --
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286456#comment-15286456
 ] 

Steve Loughran commented on HADOOP-12667:
-

Looking at the patch:

# All the check that is needed is a {{getFileStatus()}} plus a check that there is a 
parent dir (see the sketch after this list). But as create() does that anyway, it could 
be optimised by having the internal create do/not do this.
# Looking at S3A, I'm not sure that parent dirs are created right now. I guess 
it relies on the fact that creating a long blob path brings the parent dir in 
implicitly. But what if there is a parent and it is a file? Maybe we need a 
test case in {{AbstractContractCreateTest}} to make sure that it's an error to 
attempt that.
# This implementation of {{createNonRecursive()}} will need that.
# I don't see any tests for {{createNonRecursive()}} in the filesystem 
contract tests, or indeed, anything in 
{{hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md}}
 to define the semantics of the operation. I'm afraid that'll be needed. The 
test you've written will be a foundation for that, though it'll need more tests 
(parent is file, parent is root dir, overwrite = true, overwrite = false).
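A minimal sketch of the parent check from point 1, assuming one {{getFileStatus()}} on the 
parent is sufficient; this is not the attached patch, and the exception choice is 
illustrative.

{code:java}
// Illustrative sketch: fail if the parent is missing or is a file.
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.fs.Path;

final class CreateNonRecursiveSketch {
  static void checkParent(FileSystem fs, Path f) throws IOException {
    Path parent = f.getParent();
    if (parent == null) {
      return;   // creating directly under the root
    }
    // getFileStatus throws FileNotFoundException if the parent is absent.
    FileStatus st = fs.getFileStatus(parent);
    if (!st.isDirectory()) {
      throw new ParentNotDirectoryException("Parent of " + f + " is a file: " + parent);
    }
  }
}
{code}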

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Summary: Add tests to verify that S3A supports SSE-S3 encryption  (was: add 
tests to verify that s3a supports SSE-S3 encryption)

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose:
> # a test which sets encryption = AES256 and expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error.
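An illustrative sketch of the two proposed tests, assuming the option is the 
{{fs.s3a.server-side-encryption-algorithm}} configuration key; the real test classes, 
setup, and assertions will differ.

{code:java}
// Illustrative sketch only; the filesystem setup is a placeholder for the contract-test
// machinery, and the configuration key is an assumption.
import static org.junit.Assert.fail;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class EncryptionTestSketch {
  private FileSystem createFileSystem(String algorithm) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.server-side-encryption-algorithm", algorithm);
    return FileSystem.newInstance(conf);   // placeholder for the real S3A test setup
  }

  @Test
  public void testAes256WritesNormally() throws Exception {
    FileSystem fs = createFileSystem("AES256");
    fs.create(new Path("/test/encrypted-file")).close();   // expected to behave as usual
  }

  @Test
  public void testUnsupportedAlgorithmRejected() throws Exception {
    FileSystem fs = createFileSystem("DES");
    try {
      fs.create(new Path("/test/should-fail")).close();
      fail("Expected create() to be rejected with a 400 Bad Request");
    } catch (Exception expected) {
      // the proposal expects a 400 "bad request" from the store
    }
  }
}
{code}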



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286439#comment-15286439
 ] 

Steve Loughran commented on HADOOP-12667:
-

Once HADOOP-13131 is in we'll have a better test base for this; marking it as a 
dependency purely for the test framework.

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286437#comment-15286437
 ] 

Steve Loughran commented on HADOOP-12667:
-

Tagging as a HADOOP-11694 feature.

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286425#comment-15286425
 ] 

Hadoop QA commented on HADOOP-13131:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 1 new + 33 
unchanged - 6 fixed = 34 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804390/HADOOP-13131-005.patch
 |
| JIRA Issue | HADOOP-13131 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 69f00b652f12 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9fe5828 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9462/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9462/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9462/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  

[jira] [Updated] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Patch Available  (was: Open)

Patch 005: checkstyle fixups, ignoring the complaint about an interface lacking 
methods.

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose:
> # a test which sets encryption = AES256 and expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Open  (was: Patch Available)

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose:
> # a test which sets encryption = AES256 and expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


