[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123102#comment-15123102
 ] 

Hadoop QA commented on HADOOP-12747:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
39s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
42s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
48s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
12s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 2 
new + 105 unchanged - 5 fixed = 107 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 36s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 4s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
35s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 117m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName |
|   | hadoop.io.compress.TestCodecPool |
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName |
|   | hadoop.ipc.TestProtoBufRpc |
\\
\\
|| Subsystem || 

[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121789#comment-15121789
 ] 

Hadoop QA commented on HADOOP-12426:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 12s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 85 
new + 101 unchanged - 0 fixed = 186 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 8s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 9s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| JDK v1.8.0_66 Timed 

[jira] [Updated] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-01-28 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-12723:
-
Status: Patch Available  (was: Open)

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.
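
For illustration, a minimal sketch of a provider satisfying the proposed 
contract; the class name and configuration keys below are hypothetical, not 
taken from the attached patch:

{code}
import java.net.URI;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.hadoop.conf.Configuration;

// Hypothetical provider resolving bucket-specific credentials from the
// Hadoop configuration, illustrating the (URI, Configuration) constructor
// contract described above.
public class BucketConfCredentialsProvider implements AWSCredentialsProvider {
  private final URI uri;
  private final Configuration conf;

  public BucketConfCredentialsProvider(URI uri, Configuration conf) {
    this.uri = uri;
    this.conf = conf;
  }

  @Override
  public AWSCredentials getCredentials() {
    String bucket = uri.getHost();
    // Illustrative keys only, not actual S3A configuration properties.
    String accessKey = conf.get("fs.s3a.bucket." + bucket + ".access.key");
    String secretKey = conf.get("fs.s3a.bucket." + bucket + ".secret.key");
    return new BasicAWSCredentials(accessKey, secretKey);
  }

  @Override
  public void refresh() {
    // Nothing is cached, so there is nothing to refresh.
  }
}
{code}

Such a class would then be named by its fully qualified class name in the 
proposed configuration key and instantiated by {{S3AFileSystem.initialize}}.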



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121861#comment-15121861
 ] 

Haohui Mai commented on HADOOP-11875:
-

Is there a reason to keep Hamlet? Maybe we should consider moving towards an 
HTML5 UI?

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> As of JDK 8, _ as a one-character identifier is disallowed. Currently the 
> Web UI uses it. We should fix these usages to compile with JDK 8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549
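
For context, a minimal example of the construct in question (class name 
illustrative):

{code}
// javac 8 warns that '_' used as an identifier might not be supported in
// releases after Java SE 8; later JDKs reject it outright. Hamlet's HTML
// DSL relies on methods named _, which is why the Web UI code trips this.
public class UnderscoreExample {
  void _() { }   // method named '_', the Hamlet pattern

  void render() {
    _();         // call site also uses the one-character name
  }
}
{code}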



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-01-28 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-12723:
-
Description: 
Although S3A currently has built-in support for 
{{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
{{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
{{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
support any other credentials provider that implements the 
{{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the ability 
to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance will 
expand the options for S3 credentials, such as:

* temporary credentials from STS, e.g. via 
{{com.amazonaws.auth.STSSessionCredentialsProvider}}
* IAM role-based credentials, e.g. via 
{{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
* a custom credentials provider that satisfies one's own needs, e.g. 
bucket-specific credentials, user-specific credentials, etc.

To support this, we can add a configuration for the fully qualified class name 
of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} and 
added to its credentials provider chain.

The configured credentials provider should implement 
{{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
accepts {{(URI uri, Configuration conf)}}.



  was:
Although S3A currently has built-in support for 
{{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
{{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
{{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
support any other credentials provider that implements the 
{{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the ability 
to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance will 
expand the options for S3 credentials, such as:

* temporary credentials from STS, e.g. via 
{{com.amazonaws.auth.STSSessionCredentialsProvider}}
* IAM role-based credentials, e.g. via 
{{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
* a custom credentials provider that satisfies one's own needs, e.g. 
bucket-specific credentials, user-specific credentials, etc.

To support this, we can add a configuration for the fully qualified class name 
of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} and 
added to its credentials provider chain.

The configured credentials provider should implement 
{{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
accepts {{(URI name, Configuration conf)}}.




> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123058#comment-15123058
 ] 

Gera Shegalov commented on HADOOP-12747:


FYI, I've been working around this problem by using the GenericOptionsParser 
(GOP) options
{code} 
 -Dmapreduce.application.classpath='libs/*' -files libs
{code}
for the case where all the dependencies are in the directory ./libs. You could 
consider making this the official behavior of libjars by allowing directories 
in libjars.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch
>
>
> There is a problem when a user's job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do this only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12735) core-default.xml misspells hadoop.workaround.non.threadsafe.getpwuid

2016-01-28 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123107#comment-15123107
 ] 

Ray Chiang commented on HADOOP-12735:
-

Thanks for the feedback and the commit!

> core-default.xml misspells hadoop.workaround.non.threadsafe.getpwuid
> 
>
> Key: HADOOP-12735
> URL: https://issues.apache.org/jira/browse/HADOOP-12735
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-12735.001.patch, HADOOP-12735.002.patch
>
>
> The property as defined in core-default.xml is
> bq.  hadoop.work.around.non.threadsafe.getpwuid
> But in NativeIO.java (the only place I can see a similar reference), the 
> property is defined as:
> bq. static final String WORKAROUND_NON_THREADSAFE_CALLS_KEY = 
> "hadoop.workaround.non.threadsafe.getpwuid";
> Note the extra period (.) inside the word "workaround" in the 
> core-default.xml spelling.
> Should the code be made to match the property or vice versa?
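
A small sketch of why the mismatch matters: a value set under the XML spelling 
never reaches the lookup the code actually performs (key strings copied from 
the description above):

{code}
import org.apache.hadoop.conf.Configuration;

public class GetpwuidKeyMismatch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // The spelling used in core-default.xml (extra period):
    conf.setBoolean("hadoop.work.around.non.threadsafe.getpwuid", true);
    // The key NativeIO actually reads:
    boolean seen =
        conf.getBoolean("hadoop.workaround.non.threadsafe.getpwuid", false);
    System.out.println(seen); // false: the XML setting is silently ignored
  }
}
{code}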



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-01-28 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-12723:
-
Attachment: HADOOP-12723.0.patch

Attaching the patch. I tested it in the us-east-1 region.

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12725) RPC call benchmark and optimization in different SASL QOP level

2016-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122152#comment-15122152
 ] 

Chris Nauroth commented on HADOOP-12725:


This effort seems like it would be related to HADOOP-10768, which seeks to 
optimize RPC with SASL auth-conf by integrating with the AES-NI support.  I'm 
linking the issues.

> RPC call benchmark and optimization in different SASL QOP level
> ---
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC calls when security is enabled and different SASL 
> QOP (Quality of Protection) levels are enforced. Given the data collected by 
> this benchmark, it would then be possible to tell whether any performance 
> concern arises when enforcing the privacy, integrity, or authentication 
> protection levels, and to optimize accordingly.
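
For reference, the QOP level is selected through the {{hadoop.rpc.protection}} 
property (values {{authentication}}, {{integrity}}, {{privacy}}). A minimal 
sketch of how such a benchmark loop might cycle through the levels; 
{{runWorkload}} is a placeholder for the timed RPC calls:

{code}
import org.apache.hadoop.conf.Configuration;

public class QopBenchmarkSketch {
  // Placeholder for the timed workload: issue N identical RPC calls.
  static void runWorkload(Configuration conf) { /* ... */ }

  public static void main(String[] args) {
    // hadoop.rpc.protection maps to SASL QOP: auth, auth-int, auth-conf.
    for (String qop : new String[] {"authentication", "integrity", "privacy"}) {
      Configuration conf = new Configuration();
      conf.set("hadoop.rpc.protection", qop);
      long start = System.nanoTime();
      runWorkload(conf);
      System.out.printf("%s: %.1f ms%n", qop,
          (System.nanoTime() - start) / 1e6);
    }
  }
}
{code}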



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-12747:


 Summary: support wildcard in libjars argument
 Key: HADOOP-12747
 URL: https://issues.apache.org/jira/browse/HADOOP-12747
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Reporter: Sangjin Lee
Assignee: Sangjin Lee


There is a problem when a user's job adds too many dependency jars on the 
command line. The HADOOP_CLASSPATH part can be addressed, including by using 
wildcards (\*). But the same cannot be done with the -libjars argument. Today 
it takes only fully specified file paths.

We may want to consider supporting wildcards as a way to help users in this 
situation. The idea is to handle it the same way the JVM does: \* expands to 
the list of jars in that directory. It does not traverse into any child 
directory.

Also, it probably would be a good idea to do this only for libjars (i.e. don't 
do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122097#comment-15122097
 ] 

Chris Nauroth commented on HADOOP-12747:


There is some existing wildcard matching code in 
{{FileUtil#createJarWithClassPath}} that might be useful for refactoring, or 
at least as inspiration.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> There is a problem when a user's job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do this only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12668) Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak ciphers through ssl-server.conf

2016-01-28 Thread Vijay Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Singh updated HADOOP-12668:
-
Attachment: Hadoop-12668.007.patch

Please find attached the updated patch file containing the JUnit test and code 
to address the review feedback. I look forward to any additional suggestions. 
Thank you for reviewing the changes.

> Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak 
> ciphers through ssl-server.conf
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from each service's respective 
> configuration section. However, the SSL/TLS protocol used by these Jetty 
> servers can be downgraded to weak cipher suites. This change aims to add the 
> following functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose 
> we make this configurable through ssl-server.xml, so each service can choose 
> to disable specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude 
> the ciphers supplied through this key.
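
A minimal sketch of the filtering idea in plain JSSE terms; the property name 
comes from the description above, while the actual wiring into HttpServer2's 
connector is more involved:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLEngine;
import org.apache.hadoop.conf.Configuration;

public class CipherExclusionSketch {
  // Drop the suites named in ssl.server.exclude.cipher.list
  // (comma-separated) from an engine's enabled set.
  static void excludeWeakCiphers(SSLEngine engine, Configuration sslConf) {
    List<String> excluded = Arrays.asList(
        sslConf.getTrimmedStrings("ssl.server.exclude.cipher.list"));
    List<String> enabled = new ArrayList<>();
    for (String suite : engine.getEnabledCipherSuites()) {
      if (!excluded.contains(suite)) {
        enabled.add(suite);
      }
    }
    engine.setEnabledCipherSuites(enabled.toArray(new String[0]));
  }
}
{code}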



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12725) RPC call benchmark and optimization in different SASL QOP level

2016-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122152#comment-15122152
 ] 

Chris Nauroth edited comment on HADOOP-12725 at 1/28/16 7:27 PM:
-

This effort seems like it would be related to HADOOP-10768, which seeks to 
optimize RPC with SASL auth-conf by integrating with the AES-NI support.  I'm 
linking the issues.  Cc [~hitliuyi].


was (Author: cnauroth):
This effort seems like it would be related to HADOOP-10768, which seeks to 
optimize RPC with SASL auth-conf by integrating with the AES-NI support.  I'm 
linking the issues.

> RPC call benchmark and optimization in different SASL QOP level
> ---
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC calls when security is enabled and different SASL 
> QOP (Quality of Protection) levels are enforced. Given the data collected by 
> this benchmark, it would then be possible to tell whether any performance 
> concern arises when enforcing the privacy, integrity, or authentication 
> protection levels, and to optimize accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122092#comment-15122092
 ] 

Hadoop QA commented on HADOOP-12723:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} root: patch generated 0 new + 48 unchanged - 1 fixed 
= 48 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 58s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 5s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF 

[jira] [Commented] (HADOOP-12444) Consider implementing lazy seek in S3AInputStream

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122511#comment-15122511
 ] 

Hadoop QA commented on HADOOP-12444:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-tools/hadoop-aws: patch generated 3 new + 7 
unchanged - 3 fixed = 10 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 
fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.nextReadPos; locked 76% of time  
Unsynchronized access at S3AInputStream.java:76% of time  Unsynchronized access 
at S3AInputStream.java:[line 134] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.pos; locked 89% of time  Unsynchronized 
access at 

[jira] [Updated] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12702:
--
Attachment: HADOOP-12702.006.patch

Streams are now nulled out on close.

> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch, 
> HADOOP-12702.006.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.
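
A minimal sketch of the shape such a sink could take; the property name, file 
name, and hour format are assumptions for illustration, not taken from the 
attached patches:

{code}
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.metrics2.MetricsException;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class HdfsSinkSketch implements MetricsSink {
  // Assumed hourly pattern for the per-hour directory names.
  private final SimpleDateFormat hourFormat = new SimpleDateFormat("yyyyMMddHH");
  private String prefix;
  private String currentHour;
  private FSDataOutputStream out;

  @Override
  public void init(SubsetConfiguration conf) {
    prefix = conf.getString("directory-prefix"); // hypothetical property name
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    try {
      String hour = hourFormat.format(new Date(record.timestamp()));
      if (!hour.equals(currentHour)) {      // hour rolled over: new dir/file
        if (out != null) {
          out.close();
        }
        Path file = new Path(prefix + hour, "metrics.log"); // name illustrative
        out = FileSystem.get(new Configuration()).create(file);
        currentHour = hour;
      }
      out.writeBytes(record.toString() + "\n");
    } catch (IOException e) {
      // Throwing mirrors the "write failures are fatal" requirement above.
      throw new MetricsException("Failed to write metrics", e);
    }
  }

  @Override
  public void flush() {
    try {
      if (out != null) {
        out.hflush();
      }
    } catch (IOException e) {
      throw new MetricsException("Failed to flush", e);
    }
  }
}
{code}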



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-01-28 Thread Steven Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122484#comment-15122484
 ] 

Steven Wong commented on HADOOP-12723:
--

The failed junit tests appear to be unrelated.

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122394#comment-15122394
 ] 

Karthik Kambatla commented on HADOOP-12702:
---

bq. Should close() call flush()?
Missed the fact that close() calls close() on the underlying stream, which in 
turn takes care of the flush.

The latest patch looks good, but for one nit: we should null out the underlying 
streams on close. +1 after that. 

> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12444) Consider implementing lazy seek in S3AInputStream

2016-01-28 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-12444:
--
Attachment: HADOOP-12444.2.patch

Attaching the revised .2 version of the patch for review; it takes care of EOF 
when seeking past the file length.

> Consider implementing lazy seek in S3AInputStream
> -
>
> Key: HADOOP-12444
> URL: https://issues.apache.org/jira/browse/HADOOP-12444
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-12444.1.patch, HADOOP-12444.2.patch, 
> HADOOP-12444.WIP.patch
>
>
> - Currently, "read(long position, byte[] buffer, int offset, int length)" is 
> not implemented in S3AInputStream (unlike DFSInputStream). So, 
> "readFully(long position, byte[] buffer, int offset, int length)" in 
> S3AInputStream goes through the default implementation of seek(), read(), 
> seek() in FSInputStream.
> - However, seek() in S3AInputStream involves re-opening the connection to S3 
> every time 
> (https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L115).
> - It would be good to consider a lazy seek implementation to reduce 
> connection overheads to S3 (e.g. Presto implements lazy seek: 
> https://github.com/facebook/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/PrestoS3FileSystem.java#L623)
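
A minimal sketch of the lazy-seek idea; field and method names are 
illustrative rather than the actual patch. seek() only records the target 
position, and the connection is (re)opened at most once, just before data is 
read:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

abstract class LazySeekSketch extends InputStream {
  private long pos;          // position of the currently open stream
  private long nextReadPos;  // position requested by the last seek()
  private long contentLength;
  private InputStream wrappedStream;

  // Open (or reopen) the underlying S3 object stream at targetPos.
  abstract InputStream reopen(long targetPos) throws IOException;

  public synchronized void seek(long targetPos) throws IOException {
    if (targetPos > contentLength) {
      throw new EOFException("Cannot seek past end of file");
    }
    nextReadPos = targetPos;  // bookkeeping only; no reopen here
  }

  private void lazySeek() throws IOException {
    if (wrappedStream == null || nextReadPos != pos) {
      wrappedStream = reopen(nextReadPos);  // one reopen, right before reading
      pos = nextReadPos;
    }
  }

  @Override
  public synchronized int read() throws IOException {
    lazySeek();
    int b = wrappedStream.read();
    if (b >= 0) {
      pos++;
      nextReadPos = pos;
    }
    return b;
  }
}
{code}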



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122254#comment-15122254
 ] 

Sangjin Lee commented on HADOOP-12747:
--

Thanks for that [~cnauroth]! There is also 
{{ApplicationClassLoader#constructUrlsFromClasspath}} that does a similar 
thing. Perhaps we can refactor this logic into a common place.

One interesting thing to consider is whether we want to limit this wildcard 
expansion to local paths or support remote filesystems as well. I think both 
{{FileUtil}} and {{ApplicationClassLoader}} consider only local paths. Although 
supporting expansion for remote paths might make this enhancement more 
consistent, it might increase the scope somewhat. Thoughts?

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> There is a problem when a user's job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do this only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122300#comment-15122300
 ] 

Chris Nauroth commented on HADOOP-12747:


I'm pretty sure {{-libjars}} only accepts paths on the local file system 
(see HADOOP-7112). Since that's the case, I'd suggest keeping the scope of 
this issue limited to wildcard support on the local file system.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> There is a problem when a user's job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do this only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122671#comment-15122671
 ] 

Hadoop QA commented on HADOOP-12702:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
54s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
52s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
57s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
38s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.shell.find.TestName |
|   | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestAnd |
|   | hadoop.fs.shell.find.TestIname |
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.shell.find.TestName |
|   | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestIname |
| JDK v1.7.0_91 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Updated] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-12702:
--
Issue Type: New Feature  (was: Improvement)

> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch, 
> HADOOP-12702.006.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122600#comment-15122600
 ] 

Hadoop QA commented on HADOOP-12622:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 44 unchanged - 2 fixed = 44 total (was 46) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 23s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 41s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781600/HADOOP-12622-v4.patch 
|
| JIRA Issue | HADOOP-12622 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 174702c6e90c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HADOOP-12668) Modify HDFS embeded jetty server logic in HttpServer2.java to exclude weak Ciphers through ssl-server.conf

2016-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122700#comment-15122700
 ] 

Hadoop QA commented on HADOOP-12668:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
58s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 42s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 21s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 46s {color} 
| {color:red} hadoop-yarn-common in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 53s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 38s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 45s {color} 
| {color:red} hadoop-yarn-common in the patch failed with JDK v1.7.0_91. 
{color} |

[jira] [Commented] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122704#comment-15122704
 ] 

Karthik Kambatla commented on HADOOP-12702:
---

The unit test failures look unrelated. 

+1, checking this in. 

> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch, 
> HADOOP-12702.006.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12725) RPC call benchmark and optimization in different SASL QOP level

2016-01-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122612#comment-15122612
 ] 

Kai Zheng commented on HADOOP-12725:


Thanks [~cnauroth] for the hint. Yes, this should be a duplicate of 
HADOOP-10768. I will go through the related discussions and discuss with 
[~hitliuyi] about how to collaborate. If that sounds good, we would keep this 
one for the benchmark work and for prototyping possible optimization 
solutions, which would then be submitted to HADOOP-10768 for broader 
discussion.

> RPC call benchmark and optimization in different SASL QOP level
> ---
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC calls when security is enabled and different SASL 
> QOP (Quality of Protection) levels are enforced. Given the data collected by 
> this benchmark, it would then be possible to tell whether there is any 
> performance concern when considering enforcing the privacy, integrity, or 
> authenticity protection levels, and to do optimization accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12725) RPC encryption benchmark and optimization prototypes

2016-01-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12725:
---
Summary: RPC encryption benchmark and optimization prototypes  (was: RPC 
call benchmark and optimization in different SASL QOP level)

> RPC encryption benchmark and optimization prototypes
> 
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC calls when security is enabled and different SASL 
> QOP (Quality of Protection) levels are enforced. Given the data collected by 
> this benchmark, it would then be possible to tell whether there is any 
> performance concern when considering enforcing the privacy, integrity, or 
> authenticity protection levels, and to do optimization accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12725) RPC call benchmark and optimization in different SASL QOP level

2016-01-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120987#comment-15120987
 ] 

Kai Zheng commented on HADOOP-12725:


Thanks Wei for your quick work! So it looks like we can focus on the GSSAPI 
layer first. How would you like to change your benchmark code to encrypt the 
packet data with the {{Chimera}} cipher? Then we may get an idea of how much 
we could optimize if it's possible to use a high-efficiency cipher in that 
layer. Thanks.

> RPC call benchmark and optimization in different SASL QOP level
> ---
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC calls when security is enabled and different SASL 
> QOP (Quality of Protection) levels are enforced. Given the data collected by 
> this benchmark, it would then be possible to tell whether there is any 
> performance concern when considering enforcing the privacy, integrity, or 
> authenticity protection levels, and to do optimization accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12725) RPC call benchmark and optimization in different SASL QOP level

2016-01-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120996#comment-15120996
 ] 

Kai Zheng commented on HADOOP-12725:


To get around this in a simpler manner for the time being, when trying another 
cipher in your GSSAPI test client and server, a fake encryption key like 
"123456" could be used, without involving the security context.
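
To illustrate the shape of such a throwaway measurement, here is a rough 
sketch using the JRE's {{javax.crypto}} with a hard-coded key; {{Chimera}}'s 
actual API is not shown here:

{code}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class PacketEncryptBench {
  public static void main(String[] args) throws Exception {
    byte[] key = "1234567890123456".getBytes("UTF-8"); // fake 128-bit key
    byte[] iv = new byte[16];
    byte[] packet = new byte[64 * 1024];               // a mock "RPC packet"
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    long start = System.nanoTime();
    for (int i = 0; i < 10000; i++) {
      c.update(packet);                                // encrypt 64 KB
    }
    long elapsed = System.nanoTime() - start;
    System.out.printf("%.1f MB/s%n",
        10000 * packet.length / 1e6 / (elapsed / 1e9));
  }
}
{code}

Swapping the cipher implementation while keeping the loop fixed would give a 
like-for-like throughput comparison.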

> RPC call benchmark and optimization in different SASL QOP level
> ---
>
> Key: HADOOP-12725
> URL: https://issues.apache.org/jira/browse/HADOOP-12725
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
>
> This would implement a benchmark tool to measure and compare the performance 
> of Hadoop IPC/RPC call when security is enabled and different SASL 
> QOP(Quality of Protection) is enforced. Given the data collected by this 
> benchmark, it would then be able to know if any performance concern when 
> considering to enforce privacy, integration, or authenticy protection level, 
> and do optimization accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121343#comment-15121343
 ] 

Steve Loughran commented on HADOOP-12426:
-

The findbugs check is broken; it's got confused by a \n straight after a %s. 
Let's see if inserting a space fixes the confusion:

{code}
  private String arg(String name, String params, String meaning) {
return String.format("[%s%s%s] : %s\n",
name, (!params.isEmpty() ? " " : ""), params, meaning);
  }
{code}
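
If this is findbugs' usual {{VA_FORMAT_STRING_USES_NEWLINE}} complaint, the 
conventional fix is {{%n}} rather than a space. A hypothetical variant, not 
the patch:

{code}
  // Hypothetical alternative: findbugs prefers the platform-independent %n
  // over a literal \n at the end of a String.format() format string.
  private String arg(String name, String params, String meaning) {
    return String.format("[%s%s%s] : %s%n",
        name, (!params.isEmpty() ? " " : ""), params, meaning);
  }
{code}

Note that {{%n}} emits the platform line separator, so output would differ on 
Windows.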


> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch, HADOOP-12426-006.patch, 
> HADOOP-12426-007.patch, HADOOP-12426-008.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121306#comment-15121306
 ] 

Steve Loughran commented on HADOOP-11875:
-

This is going to be fun in Hamlet. It not only has the method {{_()}}, it has 
the interface {{ _ }} too.
We'll have to do something like

# define a new method and interface, {{__}}
# tag the old one as deprecated "will be removed for Java 9"
# adopt the new one while retaining the old one for compatibility, for now
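
Roughly this shape, as a sketch rather than the real Hamlet sources:

{code}
public class HamletSketch {
  /** New marker interface replacing the one-character {@code _}. */
  public interface __ {
  }

  /** @deprecated will be removed for Java 9; use {@link #__()} instead. */
  @Deprecated
  public HamletSketch _() {
    return __();
  }

  /** Same behaviour under a JDK 9-safe name. */
  public HamletSketch __() {
    return this;
  }
}
{code}

On JDK 8 the {{_()}} method still compiles, with a warning; JDK 9 turns it 
into an error, hence the deprecation window.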


> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12740) Hadoop copyFromLocal command fails with NoSuchMethodError on startCopyFromBlob

2016-01-28 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-12740.

Resolution: Not A Problem

Thank you, Maik.  I'll close this issue.

> Hadoop copyFromLocal command fails with NoSuchMethodError on startCopyFromBlob
> --
>
> Key: HADOOP-12740
> URL: https://issues.apache.org/jira/browse/HADOOP-12740
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Maik Groenewegen
>
> There was a change in the azure-storage version from 3.x to 4.x where the 
> method startCopyFromBlob was changed to startCopy. 
> https://github.com/Azure/azure-storage-java/blob/d108c110ea042df5c898a461007b91818fa98f15/BreakingChanges.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12748) Uncaught exceptions are not caught/logged when ExecutorService is used

2016-01-28 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created HADOOP-12748:
--

 Summary: Uncaught exceptions are not caught/logged when 
ExecutorService is used
 Key: HADOOP-12748
 URL: https://issues.apache.org/jira/browse/HADOOP-12748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana


{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debugging issues that are otherwise difficult 
to trace (e.g YARN-4643) because there is nothing in the logs indicating an 
uncaught exception/error 
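
A minimal sketch of that mechanism, assuming a fixed-size pool; the class name 
and the stderr logging are placeholders, not a proposed patch. The 
{{isDone()}} check matters because tasks submitted via {{submit()}} report 
failures through the returned {{Future}}, not through the {{t}} argument:

{code}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingThreadPoolExecutor extends ThreadPoolExecutor {

  public LoggingThreadPoolExecutor(int poolSize) {
    super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
  }

  @Override
  protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    // Tasks submitted via submit() wrap failures in the returned Future,
    // so t is null here; unwrap without blocking on unfinished tasks.
    if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone()) {
      try {
        ((Future<?>) r).get();
      } catch (CancellationException e) {
        t = e;
      } catch (ExecutionException e) {
        t = e.getCause();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
    if (t != null) {
      // A real version would use the project's logging, not stderr.
      System.err.println("Uncaught exception in pool thread: " + t);
    }
  }
}
{code}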




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Status: Patch Available  (was: Open)

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch
>
>
> There is a problem when a user job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-01-28 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Attachment: HADOOP-12747.01.patch

Posted patch v.1. I tested it with a pseudo-distributed cluster.

It takes a pretty minimal approach. When it sees a wildcard in the libjars 
option value, it replaces it with the jars in that directory and sets the 
result onto tmpjars.

I refactored {{FileUtil}}, {{ApplicationClassLoader}}, and 
{{GenericOptionsParser}} to use the common implementation (the one that was in 
{{FileUtil}}).

I also updated {{TestGenericOptionsParser}} to use JUnit 4.

I would greatly appreciate your review. Thanks!
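
For anyone skimming the idea rather than the patch, a hypothetical sketch of 
the JVM-style expansion; the names here are invented, and the real logic is 
the shared {{FileUtil}} implementation mentioned above:

{code}
import java.io.File;
import java.io.FilenameFilter;
import java.util.ArrayList;
import java.util.List;

public class LibjarsWildcard {
  /**
   * Expands a trailing "*" (e.g. /path/to/lib/*) to the jars directly in
   * that directory, JVM-classpath style: no recursion into subdirectories.
   */
  static List<String> expand(String entry) {
    List<String> jars = new ArrayList<String>();
    if (entry.endsWith("*")) {
      File dir = new File(entry.substring(0, entry.length() - 1));
      File[] found = dir.listFiles(new FilenameFilter() {
        @Override
        public boolean accept(File d, String name) {
          return name.endsWith(".jar");
        }
      });
      if (found != null) {
        for (File f : found) {
          jars.add(f.getPath());
        }
      }
    } else {
      jars.add(entry);   // not a wildcard: pass through untouched
    }
    return jars;
  }
}
{code}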

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch
>
>
> There is a problem when a user job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12646) NPE thrown at KMS startup

2016-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12646:

Affects Version/s: 3.0.0

> NPE thrown at KMS startup
> -
>
> Key: HADOOP-12646
> URL: https://issues.apache.org/jira/browse/HADOOP-12646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0
>Reporter: Archana T
>Assignee: Archana T
>
> NPE thrown while starting KMS --
> ERROR: Hadoop KMS could not be started
> REASON: java.lang.NullPointerException
> Stacktrace:
> ---
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.ProviderUtils.unnestUri(ProviderUtils.java:35)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:134)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:91)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:669)
> at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:167)
> at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5017)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5531)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1263)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1948)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12646) NPE thrown at KMS startup

2016-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121594#comment-15121594
 ] 

Steve Loughran commented on HADOOP-12646:
-

Ignoring the fact that a meaningful exception should be provided, this looks 
like there's no authority in your URI, or no nested URI.

{code}
nestedUri.getAuthority().split("@", 2);
{code}

What is your KMS URI?
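
For reference, a hypothetical guard for that spot, turning the NPE into a 
diagnosable error; the URI in the message is only an example of the expected 
nested form, not the committed fix:

{code}
import java.net.URI;

public class UnnestGuard {
  static String[] authorityParts(URI nestedUri) {
    String authority = nestedUri.getAuthority();
    if (authority == null) {
      throw new IllegalArgumentException("No authority in key provider URI "
          + nestedUri + "; expected a nested form such as "
          + "jceks://hdfs@nn:9001/path/keystore.jceks");
    }
    return authority.split("@", 2);
  }
}
{code}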

> NPE thrown at KMS startup
> -
>
> Key: HADOOP-12646
> URL: https://issues.apache.org/jira/browse/HADOOP-12646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0
>Reporter: Archana T
>Assignee: Archana T
>
> NPE thrown while starting KMS --
> ERROR: Hadoop KMS could not be started
> REASON: java.lang.NullPointerException
> Stacktrace:
> ---
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.ProviderUtils.unnestUri(ProviderUtils.java:35)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:134)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:91)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:669)
> at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:167)
> at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5017)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5531)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1263)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1948)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Patch Available  (was: Open)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch, HADOOP-12426-006.patch, 
> HADOOP-12426-007.patch, HADOOP-12426-008.patch, HADOOP-12426-009.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12650) Document all of the secret env vars

2016-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121604#comment-15121604
 ] 

Steve Loughran commented on HADOOP-12650:
-

HADOOP-12426 patch 009 documents {{HADOOP_JAAS_DEBUG}} and adds a mention of it 
to `hadoop-env.sh`, that being the place to set it when diagnosing server-side 
problems.

> Document all of the secret env vars
> ---
>
> Key: HADOOP-12650
> URL: https://issues.apache.org/jira/browse/HADOOP-12650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Over the years, developers have added all kinds of magical environment 
> variables in the Java code without any concern or thought about either a) 
> documenting them or b) whether they are already used by something else.  We 
> need to update at least hadoop-env.sh to contain a list of these env vars so 
> that end users know that they are either private/unsafe and/or how they can 
> be used.
> Just one of many examples: HADOOP_JAAS_DEBUG.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-12702:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks Daniel for reporting and working on this. 

Just committed to trunk and branch-2. 

> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch, 
> HADOOP-12702.006.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12748) Uncaught exceptions are not caught/logged when ExecutorService is used

2016-01-28 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated HADOOP-12748:
---
Attachment: TestUncaughExceptionHandler.java

Uploading some test code that demonstrates this issue. 

> Uncaught exceptions are not caught/logged when ExecutorService is used
> --
>
> Key: HADOOP-12748
> URL: https://issues.apache.org/jira/browse/HADOOP-12748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: TestUncaughExceptionHandler.java
>
>
> {{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown 
> by the tasks running in the corresponding thread pool. These are passed to an 
> {{afterExecute}} method which seems to do nothing by default unless 
> overridden.  Even though we register {{UncaughtExceptionHandler}}s in various 
> places (e.g {{YarnUncaughtExceptionHandler}}), these handlers are not invoked 
> because the uncaught exceptions/errors are not propagated all the way. 
> To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with 
> an {{afterExecute}} method that would log these exceptions/errors. Logging 
> these exceptions/errors would be useful in debug issues that are otherwise 
> difficult to trace (e.g YARN-4643) because there is nothing in the logs 
> indicating an uncaught exception/error 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12748) Uncaught exceptions are not caught/logged when ExecutorService is used

2016-01-28 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated HADOOP-12748:
---
Description: 
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debugging issues that are otherwise 
difficult to trace (e.g YARN-4643) because there is nothing in the logs 
indicating an uncaught exception/error 

Edit: uploaded some test code that demonstrates this issue. 


  was:
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debug issues that are otherwise difficult 
to trace (e.g YARN-4643) because there is nothing in the logs indicating an 
uncaught exception/error 

Edit: uploaded some test code that demonstrates this issue. 



> Uncaught exceptions are not caught/logged when ExecutorService is used
> --
>
> Key: HADOOP-12748
> URL: https://issues.apache.org/jira/browse/HADOOP-12748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: TestUncaughExceptionHandler.java
>
>
> {{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown 
> by the tasks running in the corresponding thread pool. These are passed to an 
> {{afterExecute}} method which seems to do nothing by default unless 
> overridden.  Even though we register {{UncaughtExceptionHandler}}s in various 
> places (e.g {{YarnUncaughtExceptionHandler}}), these handlers are not invoked 
> because the uncaught exceptions/errors are not propagated all the way. 
> To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with 
> an {{afterExecute}} method that would log these exceptions/errors. Logging 
> these exceptions/errors would be useful in debugging issues that are 
> otherwise difficult to trace (e.g YARN-4643) because there is nothing in the 
> logs indicating an uncaught exception/error 
> Edit: uploaded some test code that demonstrates this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12748) Uncaught exceptions are not caught/logged when ExecutorService is used

2016-01-28 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated HADOOP-12748:
---
Description: 
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debug issues that are otherwise difficult 
to trace (e.g YARN-4643) because there is nothing in the logs indicating an 
uncaught exception/error 

Edit: uploaded some test code that demonstrates this issue. 


  was:
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debug issues that are otherwise difficult 
to trace (e.g YARN-4643) because there is nothing in the logs indicating an 
uncaught exception/error 



> Uncaught exceptions are not caught/logged when ExecutorService is used
> --
>
> Key: HADOOP-12748
> URL: https://issues.apache.org/jira/browse/HADOOP-12748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: TestUncaughExceptionHandler.java
>
>
> {{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown 
> by the tasks running in the corresponding thread pool. These are passed to an 
> {{afterExecute}} method which seems to do nothing by default unless 
> overridden.  Even though we register {{UncaughtExceptionHandler}}s in various 
> places (e.g {{YarnUncaughtExceptionHandler}}), these handlers are not invoked 
> because the uncaught exceptions/errors are not propagated all the way. 
> To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with 
> an {{afterExecute}} method that would log these exceptions/errors. Logging 
> these exceptions/errors would be useful in debug issues that are otherwise 
> difficult to trace (e.g YARN-4643) because there is nothing in the logs 
> indicating an uncaught exception/error 
> Edit: uploaded some test code that demonstrates this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-01-28 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created HADOOP-12749:
--

 Summary: Create a threadpoolexecutor that overrides afterExecute 
to log uncaught exceptions/errors
 Key: HADOOP-12749
 URL: https://issues.apache.org/jira/browse/HADOOP-12749
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12702) Add an HDFS metrics sink

2016-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122734#comment-15122734
 ] 

Hudson commented on HADOOP-12702:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9203 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9203/])
HADOOP-12702. Add an HDFS metrics sink. (Daniel Templeton via kasha) (kasha: 
rev ee005e010cff3f97a5daa8000ac2cd151e2631ca)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/sink/RollingFileSystemSinkTestBase.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/RollingFileSystemSink.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/sink/TestRollingFileSystemSink.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Add an HDFS metrics sink
> 
>
> Key: HADOOP-12702
> URL: https://issues.apache.org/jira/browse/HADOOP-12702
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HADOOP-12702.001.patch, HADOOP-12702.002.patch, 
> HADOOP-12702.003.patch, HADOOP-12702.004.patch, HADOOP-12702.005.patch, 
> HADOOP-12702.006.patch
>
>
> We need a metrics2 sink that can write metrics to HDFS. The sink should 
> accept as configuration a "directory prefix" and do the following in 
> {{putMetrics()}}
> * Get MMddHH from current timestamp.
> * If HDFS dir "dir prefix" + MMddHH doesn't exist, create it. Close any 
> currently open file and create a new file called .log in the new 
> directory.
> * Write metrics to the current log file.
> * If a write fails, it should be fatal to the process running the sink.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12748) Uncaught exceptions are not caught/logged when ExecutorService is used

2016-01-28 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated HADOOP-12748:
---
Description: 
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}} s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debugging issues that are otherwise 
difficult to trace (e.g YARN-4643) because there is nothing in the logs 
indicating an uncaught exception/error 

Edit: uploaded some test code that demonstrates this issue. 


  was:
{{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown by 
the tasks running in the corresponding thread pool. These are passed to an 
{{afterExecute}} method which seems to do nothing by default unless overridden. 
 Even though we register {{UncaughtExceptionHandler}}s in various places (e.g 
{{YarnUncaughtExceptionHandler}}), these handlers are not invoked because the 
uncaught exceptions/errors are not propagated all the way. 

To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with an 
{{afterExecute}} method that would log these exceptions/errors. Logging these 
exceptions/errors would be useful in debugging issues that are otherwise 
difficult to trace (e.g YARN-4643) because there is nothing in the logs 
indicating an uncaught exception/error 

Edit: uploaded some test code that demonstrates this issue. 



> Uncaught exceptions are not caught/logged when ExecutorService is used
> --
>
> Key: HADOOP-12748
> URL: https://issues.apache.org/jira/browse/HADOOP-12748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: TestUncaughExceptionHandler.java
>
>
> {{ThreadPoolExecutor}} catches (otherwise uncaught) exceptions/errors thrown 
> by the tasks running in the corresponding thread pool. These are passed to an 
> {{afterExecute}} method which seems to do nothing by default unless 
> overridden.  Even though we register {{UncaughtExceptionHandler}} s in 
> various places (e.g {{YarnUncaughtExceptionHandler}}), these handlers are not 
> invoked because the uncaught exceptions/errors are not propagated all the 
> way. 
> To fix this, one mechanism would be to override {{ThreadPoolExecutor}} with 
> an {{afterExecute}} method that would log these exceptions/errors. Logging 
> these exceptions/errors would be useful in debugging issues that are 
> otherwise difficult to trace (e.g YARN-4643) because there is nothing in the 
> logs indicating an uncaught exception/error 
> Edit: uploaded some test code that demonstrates this issue. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Open  (was: Patch Available)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch, HADOOP-12426-006.patch, 
> HADOOP-12426-007.patch, HADOOP-12426-008.patch, HADOOP-12426-009.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2016-01-28 Thread zhangyubiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15122979#comment-15122979
 ] 

zhangyubiao commented on HADOOP-10672:
--

[~aw]

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Assignee: zhangyubiao
>Priority: Minor
> Attachments: HADOOP-10672-v1.patch, HADOOP-10672-v2.patch, 
> HADOOP-10672-v3.patch, HADOOP-10672-v4.patch, HADOOP-10672-v5.patch, 
> HADOOP-10672.patch
>
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop. 
> Code and instructions at - https://github.com/eBay/hadoop-tsdb-connector



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)