[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215487#comment-15215487
 ] 

Zhe Zhang commented on HADOOP-12924:


Thanks Rui and Kai! The two points Rui proposed above make sense to me.

I also like the {{rs-legacy}} name. For the other codec, how about 
{{rs-default}}?
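
To make the naming concrete, here's a rough sketch of how the two coder keys 
might be wired up in configuration; the property names and factory classes 
below are assumptions for illustration, not the final patch:

{code}
// Hypothetical keys mapping each codec name to a raw coder factory;
// both the key names and the factory class names are made up here.
Configuration conf = new Configuration();
conf.set("io.erasurecode.codec.rs-default.rawcoder",
    "org.apache.hadoop.io.erasurecode.rawcoder.RSRawErasureCoderFactory");
conf.set("io.erasurecode.codec.rs-legacy.rawcoder",
    "org.apache.hadoop.io.erasurecode.rawcoder.RSRawErasureCoderFactoryLegacy");
{code}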

> Add default coder key for creating raw coders
> ---------------------------------------------
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215478#comment-15215478
 ] 

Hadoop QA commented on HADOOP-11393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 4m 
17s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-tools/hadoop-datajoin in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
13s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
49s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s 
{color} | {color:red} root: patch generated 4 new + 171 unchanged - 8 fixed = 
175 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 4m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 12m 
27s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | 

[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.05.patch

Thanks [~arpitagarwal] and [~szetszwo] for the review. 
Updated the patch to:
1) Fix exception handling when constructing the scheduler instance.
2) Keep the default decay factor and decay window size the same as in the 
original FCQ.
3) Enable decay on the average response time used for scheduler decisions.

I didn't change Configuration#getTimeDurations() to return a List because that 
would incur extra boxing/unboxing overhead. A long array should be good for 
most cases.
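
As a minimal sketch of that point, assuming the getTimeDurations(String, 
TimeUnit) signature discussed here (the property name below is made up for 
illustration):

{code}
// import org.apache.hadoop.conf.Configuration;
// import java.util.concurrent.TimeUnit;
// A long[] avoids the per-element boxing a List<Long> would incur.
// "my.decay.windows" is a hypothetical key, not one added by this patch.
Configuration conf = new Configuration();
conf.set("my.decay.windows", "10s,60s,600s");
long[] windows =
    conf.getTimeDurations("my.decay.windows", TimeUnit.MILLISECONDS);
{code}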

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> ---------------------------------------------------------------------------
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch
>
>
> Currently the back-off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is opened to allow flexible 
> back-off policies, such as one based on the moving average of response time 
> of RPC calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215444#comment-15215444
 ] 

John Zhuge commented on HADOOP-10965:
-

Filed doc JIRA HADOOP-12971.

> Print fully qualified path in CommandWithDestination error messages
> --------------------------------------------------------------------
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command worked fine.
> I believe the error message is confusing and should at least be fixed. It 
> would be even better if hadoop could restore the old behaviour from 1.x, 
> where copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12971) FileSystemShell doc should explain relative path

2016-03-28 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-12971:
---

 Summary: FileSystemShell doc should explain relative path
 Key: HADOOP-12971
 URL: https://issues.apache.org/jira/browse/HADOOP-12971
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Critical


Update 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
 with information about relative paths and the current working directory, as 
suggested by [~yzhangal] during the HADOOP-10965 discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12309) [Refactor] Use java.lang.Throwable.addSuppressed(Throwable) instead of class org.apache.hadoop.io.MultipleIOException

2016-03-28 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HADOOP-12309:
-
Attachment: HADOOP-12309.3.patch

Attached the patch after rebase; please review.

> [Refactor] Use java.lang.Throwable.addSuppressed(Throwable) instead of class 
> org.apache.hadoop.io.MultipleIOException
> -----------------------------------------------------------------------------
>
> Key: HADOOP-12309
> URL: https://issues.apache.org/jira/browse/HADOOP-12309
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
> Attachments: HADOOP-12309.2.patch, HADOOP-12309.3.patch, 
> HADOOP-12309.patch
>
>
> We can use java.lang.Throwable.addSuppressed(Throwable) instead of 
> org.apache.hadoop.io.MultipleIOException, since Java 1.7+ provides support 
> for this. org.apache.hadoop.io.MultipleIOException can then be deprecated. 
> For example:
> {code}
> catch (IOException e) {
>   if (generalException == null) {
>     generalException = new IOException("General exception");
>   }
>   generalException.addSuppressed(e);
> }
> {code}
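
A slightly fuller, self-contained sketch of the same pattern (the closeAll 
helper name is illustrative):

{code}
// import java.io.Closeable;
// import java.io.IOException;
// Close several streams, keep the first IOException as the primary
// failure, and attach later ones via addSuppressed (Java 7+).
public static void closeAll(Closeable... closeables) throws IOException {
  IOException primary = null;
  for (Closeable c : closeables) {
    try {
      c.close();
    } catch (IOException e) {
      if (primary == null) {
        primary = e;
      } else {
        primary.addSuppressed(e);
      }
    }
  }
  if (primary != null) {
    throw primary;
  }
}
{code}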



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215417#comment-15215417
 ] 

Hadoop QA commented on HADOOP-12253:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 16s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795761/HADOOP-12253.2.patch |
| JIRA Issue | HADOOP-12253 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e57cb7e083a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due connection closure

2016-03-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215335#comment-15215335
 ] 

Harsh J commented on HADOOP-12970:
--

bq. Patch generated 1 ASF License warnings.

Unrelated to the patch; per the report below, it seems to have been caused 
elsewhere:

{code}
Lines that start with ? in the ASF License  report indicate files that do 
not have an Apache license header:
 !? 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/dfs.hosts.json
{code}

bq. The patch doesn't appear to include any new or modified tests. Please 
justify why no new tests are needed for this patch. Also please list what 
manual steps were performed to verify this patch.

Writing a test-case for this would be non-trivial as it would involve 
controlling the S3 service to send back a connection close header. I doubt 
there's a way to control/simulate that, as S3 does not always respond with 
"Connection: close" to the object metadata requests.

> Intermittent signature match failures in S3AFileSystem due connection closure
> ------------------------------------------------------------------------------
>
> Key: HADOOP-12970
> URL: https://issues.apache.org/jira/browse/HADOOP-12970
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HADOOP-12970.patch
>
>
> S3AFileSystem's use of {{ObjectMetadata#clone()}} method inside the 
> {{copyFile}} implementation may fail in circumstances where the connection 
> used for obtaining the metadata is closed by the server (i.e. response 
> carries a {{Connection: close}} header). Due to this header not being 
> stripped away when the {{ObjectMetadata}} is created, and due to us cloning 
> it for use in the next {{CopyObjectRequest}}, it causes the request to use 
> {{Connection: close}} headers as a part of itself.
> This causes signer related exceptions because the client now includes the 
> {{Connection}} header as part of the {{SignedHeaders}}, but the S3 server 
> does not receive the same value for it ({{Connection}} headers are likely 
> stripped away before the S3 Server tries to match signature hashes), causing 
> a failure like below:
> {code}
> 2016-03-29 19:59:30,120 DEBUG [s3a-transfer-shared--pool1-t35] 
> org.apache.http.wire: >> "Authorization: AWS4-HMAC-SHA256 
> Credential=XXX/20160329/eu-central-1/s3/aws4_request, 
> SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-acl;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-version-id,
>  Signature=MNOPQRSTUVWXYZ[\r][\n]"
> …
> com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we 
> calculated does not match the signature you provided. Check your key and 
> signing method. (Service: Amazon S3; Status Code: 403; Error Code: 
> SignatureDoesNotMatch; Request ID: ABC), S3 Extended Request ID: XYZ
> {code}
> This is intermittent because the S3 Server does not always add a 
> {{Connection: close}} directive in its response, but whenever we receive it 
> AND we clone it, the above exception would happen for the copy request. The 
> copy request is often used in the context of FileOutputCommitter, when a lot 
> of the MR attempt files on {{s3a://}} destination filesystem are to be moved 
> to their parent directories post-commit.
> I've also submitted a fix upstream to the AWS Java SDK to strip out the 
> {{Connection}} headers when dealing with {{ObjectMetadata}}, which is pending 
> acceptance and release at https://github.com/aws/aws-sdk-java/pull/669, but 
> until that release is available and can be used by us, we'll need to work 
> around the clone approach by manually excluding the {{Connection}} header 
> (not straightforward due to the {{metadata}} object being private with no 
> mutable access). We can remove such a change in the future when there's a 
> release available with the upstream fix.
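
For illustration, here is one hedged sketch of what the interim workaround 
could look like; this is not the actual patch, and it assumes only the AWS SDK 
v1 {{ObjectMetadata}} accessors named in the code:

{code}
// import java.util.Map;
// import com.amazonaws.services.s3.model.ObjectMetadata;
// Copy ObjectMetadata while dropping the Connection header, so that a
// later CopyObjectRequest does not sign a header S3 may strip.
private static ObjectMetadata cloneWithoutConnection(ObjectMetadata source) {
  ObjectMetadata copy = new ObjectMetadata();
  for (Map.Entry<String, Object> entry : source.getRawMetadata().entrySet()) {
    if (!"Connection".equalsIgnoreCase(entry.getKey())) {
      copy.setHeader(entry.getKey(), entry.getValue());
    }
  }
  copy.setUserMetadata(source.getUserMetadata());
  return copy;
}
{code}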



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due connection closure

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215306#comment-15215306
 ] 

Hadoop QA commented on HADOOP-12970:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795740/HADOOP-12970.patch |
| JIRA Issue | HADOOP-12970 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5d5931140114 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Updated] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2016-03-28 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HADOOP-12253:
-
Attachment: HADOOP-12253.2.patch

Thanks for the input. Attached the patch after modifications; please review.

> ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0
> -------------------------------------------------------------------------
>
> Key: HADOOP-12253
> URL: https://issues.apache.org/jira/browse/HADOOP-12253
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
> Environment: hadoop 2.6.0, hive 1.1.0, tez 0.7, centos 6.4
>Reporter: tangjunjie
>Assignee: Ajith S
> Attachments: HADOOP-12253.2.patch, HADOOP-12253.patch
>
>
> When I enabled HDFS federation and ran a query on Hive on Tez, it threw an 
> exception:
> {noformat}
> 8.784 PM  WARNorg.apache.hadoop.security.UserGroupInformation No 
> groups available for user tangjijun
> 3:12:28.784 PMERROR   org.apache.hadoop.hive.ql.exec.Task Failed 
> to execute tez graph.
> java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:771)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:359)
>   at 
> org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:955)
>   at 
> org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184)
>   at 
> org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:787)
>   at org.apache.tez.client.TezClient.start(TezClient.java:337)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:191)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:136)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:144)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Digging into the issue, I found the following code snippet in 
> ViewFileSystem.java:
> {noformat}
> @Override
> public FileStatus getFileStatus(Path f) throws IOException {
>   checkPathIsSlash(f);
>   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
>       PERMISSION_555, ugi.getUserName(), ugi.getGroupNames()[0],
>       new Path(theInternalDir.fullPath).makeQualified(
>           myUri, ROOT_PATH));
> }
> {noformat}
> If the cluster node doesn't have a user like tangjijun, 
> ugi.getGroupNames()[0] will throw ArrayIndexOutOfBoundsException, because no 
> user means no group.
> I created user tangjijun on that node, and then the job executed normally.
> I think this code should check whether ugi.getGroupNames() is empty. When it 
> is empty, it should log a message instead of throwing an exception.
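
A minimal sketch of that kind of guard (the fallback to the short user name is 
an illustrative assumption, not the final fix):

{code}
// Avoid ArrayIndexOutOfBoundsException when the user has no groups;
// falling back to the user name here is an illustrative choice only.
String[] groups = ugi.getGroupNames();
String group = groups.length == 0 ? ugi.getShortUserName() : groups[0];
return new FileStatus(0, true, 0, 0, creationTime, creationTime,
    PERMISSION_555, ugi.getUserName(), group,
    new Path(theInternalDir.fullPath).makeQualified(myUri, ROOT_PATH));
{code}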



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12902) JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215236#comment-15215236
 ] 

Hadoop QA commented on HADOOP-12902:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-common-project/hadoop-auth: patch generated 1 new 
+ 39 unchanged - 0 fixed = 40 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 40s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 2s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795709/HADOOP-12902.3.patch |
| JIRA Issue | HADOOP-12902 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b25f457e718d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215209#comment-15215209
 ] 

Kai Zheng commented on HADOOP-12955:


bq. we wouldn't even need a getLibraryName function.
If you meant the C function {{const char* get_library_name(char* buf, size_t 
buf_len)}}, yes, it can be avoided in the new approach.
I may use the new approach in this patch just for the erasure coding part; for 
the other native pieces, we can consider it separately. Sounds good?


> Fix bugs in the initialization of the ISA-L library JNI bindings
> ----------------------------------------------------------------
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215201#comment-15215201
 ] 

Kai Zheng commented on HADOOP-12955:


bq. I think it would be safer to have the initialization function set up a 
static variable so that it contained the library name. That way, we only have 
to expect a failure in one place, and we wouldn't even need a getLibraryName 
function.
Good idea, and I agree with the new approach. Do you mean we'd refactor all 
the related native code? getLibraryName may still be needed as a JNI call to 
retrieve the library name to print in the Java code.

I guess we'd like a small change on this aspect to move forward. How about 
removing the exception catching around 
{{ErasureCodeNative.getLibraryName()}} for now, and doing the refactoring 
suggested above separately?

> Fix bugs in the initialization of the ISA-L library JNI bindings
> ----------------------------------------------------------------
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215195#comment-15215195
 ] 

Colin Patrick McCabe commented on HADOOP-12955:
---

In general, the other native libraries expect {{getLibraryName}} to succeed if 
the initialization succeeded.  There may be some extremely rare cases where it 
doesn't, but this should reflect an internal bug 99% of the time.  In contrast, 
initialization routinely fails because of missing libraries.

I think it would be safer to have the initialization function set up a static 
variable so that it contained the library name.  That way, we only have to 
expect a failure in one place, and we wouldn't even need a {{getLibraryName}} 
function.
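
A rough sketch of that shape (illustrative only, not the actual 
{{ErasureCodeNative}} implementation; the library names are made up):

{code}
// Illustrative shape only: initialization caches the library name, so
// failure is confined to load time and getLibraryName() becomes a
// trivial accessor.
public final class ErasureCodeNative {
  private static final String LOADING_FAILURE_REASON;
  private static String libraryName = "Unavailable";

  static {
    String reason = null;
    try {
      System.loadLibrary("isal");   // illustrative library name
      libraryName = "libisal.so";   // would be set by the native initializer
    } catch (Throwable t) {
      reason = t.getMessage();
    }
    LOADING_FAILURE_REASON = reason;
  }

  public static String getLoadingFailureReason() {
    return LOADING_FAILURE_REASON;
  }

  public static String getLibraryName() {
    return libraryName;
  }
}
{code}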

> Fix bugs in the initialization of the ISA-L library JNI bindings
> ----------------------------------------------------------------
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215194#comment-15215194
 ] 

Kai Zheng commented on HADOOP-12955:


bq. There needs to be a return null; here after the THROW macro.
I thought you meant the code in getLibraryName; this one is a new mistake. 
Thanks for catching it. :)

> Fix bugs in the initialization of the ISA-L library JNI bindings
> ----------------------------------------------------------------
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215176#comment-15215176
 ] 

Colin Patrick McCabe edited comment on HADOOP-12910 at 3/29/16 12:57 AM:
-

bq. This is an interesting angle. While FileContext may offer clearer 
semantics, it offered no functionality beyond FileSystem. Would it be 
easier/clearer to implement async functionality in FileContext? If so, that 
might give users an incentive to use it.

Sorry, I just can't resist making a historical note.  Actually, we did have an 
explicit policy of adding new features to FileContext before FileSystem for a 
long time.  For example, HADOOP-6421 was added to FileContext before 
FileSystem, as far back as 2010.  A bunch of other features were also added to 
FC before FS, with the goal of "giving people an incentive to use FileContext." 
 It didn't work, and people kept using FileSystem.

bq. We don't need a third API...

An asynchronous API seems fundamentally different than either FileSystem or 
FileContext.  I'm not sure how much code reuse there really can be between FS 
and FC, which are basically collections of synchronous functions you can 
invoke, and some new async API.  Maybe we can reuse things like CreateFlag, but 
surely an async API counts as "a third API" in the truest sense.

Anyway, I don't feel strongly about where the new async functions should live, 
I just wanted to make a historical note.

P.S.  We should be sure to think more carefully about the close method this 
time; the lack of such a method in FileContext has caused a lot of grief.


was (Author: cmccabe):
bq. This is an interesting angle. While FileContext may offer clearer 
semantics, it offered no functionality beyond FileSystem. Would it be 
easier/clearer to implement async functionality in FileContext? If so, that 
might give users an incentive to use it.

The Hadoop historian in me can't help but comment that actually, we did have an 
explicit policy of adding new features to FileContext before FileSystem for a 
long time.  For example, HADOOP-6421 was added to FileContext before 
FileSystem, as far back as 2010.  A bunch of other features were also added to 
FC before FS, with the goal of "giving people an incentive to use FileContext." 
 It didn't work, and people kept using FileSystem.

bq. We don't need a third API...

An asynchronous API seems fundamentally different than either FileSystem or 
FileContext.  I'm not sure how much code reuse there really can be between FS 
and FC, which are basically collections of synchronous functions you can 
invoke, and some new async API.  Maybe we can reuse things like CreateFlag, but 
surely an async API counts as "a third API" in the truest sense.

Anyway, I don't feel strongly about where the new async functions should live, 
I just wanted to make a historical note.  Can we add a close method this time?

> Add new FileSystem API to support asynchronous method calls
> ------------------------------------------------------------
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215176#comment-15215176
 ] 

Colin Patrick McCabe commented on HADOOP-12910:
---

bq. This is an interesting angle. While FileContext may offer clearer 
semantics, it offered no functionality beyond FileSystem. Would it be 
easier/clearer to implement async functionality in FileContext? If so, that 
might give users an incentive to use it.

The Hadoop historian in me can't help but comment that actually, we did have an 
explicit policy of adding new features to FileContext before FileSystem for a 
long time.  For example, HADOOP-6421 was added to FileContext before 
FileSystem, as far back as 2010.  A bunch of other features were also added to 
FC before FS, with the goal of "giving people an incentive to use FileContext." 
 It didn't work, and people kept using FileSystem.

bq. We don't need a third API...

An asynchronous API seems fundamentally different than either FileSystem or 
FileContext.  I'm not sure how much code reuse there really can be between FS 
and FC, which are basically collections of synchronous functions you can 
invoke, and some new async API.  Maybe we can reuse things like CreateFlag, but 
surely an async API counts as "a third API" in the truest sense.

Anyway, I don't feel strongly about where the new async functions should live, 
I just wanted to make a historical note.  Can we add a close method this time?
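
For what it's worth, a hedged sketch of how caller code might look against 
such an API ({{FutureFileSystem}}/{{futureFs}} being the proposal above, and 
{{doOtherWork()}} a placeholder):

{code}
// import java.util.concurrent.Future;
// Kick off the rename, overlap other work, and block only when the
// result is needed.
Future<Boolean> pending = futureFs.rename(src, dst);
doOtherWork();
boolean renamed = pending.get();  // blocks; may throw ExecutionException
{code}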

> Add new FileSystem API to support asynchronous method calls
> ------------------------------------------------------------
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215173#comment-15215173
 ] 

Kai Zheng commented on HADOOP-12955:


Thanks Colin!
bq. There needs to be a return null; here after the THROW macro.
Yes, it's needed. Sorry, I may have accidentally lost the line while moving 
things around.
bq. If there is an UnsatisfiedLinkError, this should be reflected by the return 
value of ErasureCodeNative#getLoadingFailureReason.
Note that getLoadingFailureReason can capture exceptions during loadLibrary, 
and getLibraryName may also throw an exception now. To be consistent, I'm 
wondering if we should do the same for the other native pieces as well, so 
that getLibraryName is allowed to throw an exception in all the places.

> Fix bugs in the initialization of the ISA-L library JNI bindings
> ----------------------------------------------------------------
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8143) Change distcp to have -pb on by default

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215166#comment-15215166
 ] 

Hadoop QA commented on HADOOP-8143:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-tools/hadoop-distcp: patch generated 1 new + 13 
unchanged - 0 fixed = 14 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 4s {color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 31s {color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.tools.TestOptionsParser |
| JDK v1.7.0_95 Failed junit tests | hadoop.tools.TestOptionsParser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12678063/HADOOP-8143.1.patch |
| JIRA Issue | HADOOP-8143 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 88cfdb4227ab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Updated] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due to connection closure

2016-03-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-12970:
-
Status: Patch Available  (was: Open)

> Intermittent signature match failures in S3AFileSystem due to connection closure
> -
>
> Key: HADOOP-12970
> URL: https://issues.apache.org/jira/browse/HADOOP-12970
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HADOOP-12970.patch
>
>
> S3AFileSystem's use of the {{ObjectMetadata#clone()}} method inside the 
> {{copyFile}} implementation may fail in circumstances where the connection 
> used for obtaining the metadata is closed by the server (i.e. the response 
> carries a {{Connection: close}} header). Because this header is not stripped 
> away when the {{ObjectMetadata}} is created, and because we clone it for use 
> in the next {{CopyObjectRequest}}, the copy request ends up carrying the 
> {{Connection: close}} header itself.
> This causes signer-related exceptions because the client now includes the 
> {{Connection}} header as part of the {{SignedHeaders}}, but the S3 server 
> does not receive the same value for it ({{Connection}} headers are likely 
> stripped away before the S3 server tries to match signature hashes), causing 
> a failure like the one below:
> {code}
> 2016-03-29 19:59:30,120 DEBUG [s3a-transfer-shared--pool1-t35] 
> org.apache.http.wire: >> "Authorization: AWS4-HMAC-SHA256 
> Credential=XXX/20160329/eu-central-1/s3/aws4_request, 
> SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-acl;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-version-id,
>  Signature=MNOPQRSTUVWXYZ[\r][\n]"
> …
> com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we 
> calculated does not match the signature you provided. Check your key and 
> signing method. (Service: Amazon S3; Status Code: 403; Error Code: 
> SignatureDoesNotMatch; Request ID: ABC), S3 Extended Request ID: XYZ
> {code}
> This is intermittent because the S3 server does not always add a 
> {{Connection: close}} directive in its response, but whenever we receive one 
> AND clone it, the above exception occurs for the copy request. The copy 
> request is often used in the context of FileOutputCommitter, when many of the 
> MR attempt files on an {{s3a://}} destination filesystem are to be moved to 
> their parent directories post-commit.
> I've also submitted a fix upstream to the AWS Java SDK to strip out the 
> {{Connection}} headers when dealing with {{ObjectMetadata}}; it is pending 
> acceptance and release at https://github.com/aws/aws-sdk-java/pull/669. Until 
> that release is available and can be used by us, we'll need to work around 
> the clone approach by manually excluding the {{Connection}} header (not 
> straightforward, since the {{metadata}} object is private with no mutable 
> access). We can remove this change in the future once a release with the 
> upstream fix is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215162#comment-15215162
 ] 

Colin Patrick McCabe commented on HADOOP-12955:
---

{code}
  if (error != NULL) {
THROW(env, "java/lang/UnsatisfiedLinkError", error);
  }
{code}
There needs to be a {{return null;}} here after the {{THROW}} macro.

{code}
-  try {
-isalDetail = ErasureCodeNative.getLoadingFailureReason();
-isalDetail = ErasureCodeNative.getLibraryName();
-isalLoaded = true;
-  } catch (UnsatisfiedLinkError e) {
+  isalDetail = ErasureCodeNative.getLoadingFailureReason();
+  if (isalDetail != null) {
 isalLoaded = false;
+  } else {
+try {
+  isalDetail = ErasureCodeNative.getLibraryName();
+  isalLoaded = true;
+} catch (UnsatisfiedLinkError e) {
+  isalDetail = e.getMessage();
+  isalLoaded = false;
+}
{code}
I don't understand the rationale for doing this differently from the other 
native libraries.  If there is an {{UnsatisfiedLinkError}}, it should be 
reflected in the return value of {{ErasureCodeNative#getLoadingFailureReason}}.
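
In other words, something like the following sketch (illustrative only, not a 
reviewed patch) would match how the other native libraries are handled:
{code}
// Sketch only: treat a non-null loading-failure reason as the single source
// of truth, instead of catching UnsatisfiedLinkError in the checker itself.
String isalDetail = ErasureCodeNative.getLoadingFailureReason();
boolean isalLoaded = (isalDetail == null);
if (isalLoaded) {
  isalDetail = ErasureCodeNative.getLibraryName();
}
{code}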

> Fix bugs in the initialization of the ISA-L library JNI bindings
> 
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed. Got something like the 
> below from the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due to connection closure

2016-03-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-12970:
-
Attachment: HADOOP-12970.patch

> Intermittent signature match failures in S3AFileSystem due to connection closure
> -
>
> Key: HADOOP-12970
> URL: https://issues.apache.org/jira/browse/HADOOP-12970
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HADOOP-12970.patch
>
>
> S3AFileSystem's use of the {{ObjectMetadata#clone()}} method inside the 
> {{copyFile}} implementation may fail in circumstances where the connection 
> used for obtaining the metadata is closed by the server (i.e. the response 
> carries a {{Connection: close}} header). Because this header is not stripped 
> away when the {{ObjectMetadata}} is created, and because we clone it for use 
> in the next {{CopyObjectRequest}}, the copy request ends up carrying the 
> {{Connection: close}} header itself.
> This causes signer-related exceptions because the client now includes the 
> {{Connection}} header as part of the {{SignedHeaders}}, but the S3 server 
> does not receive the same value for it ({{Connection}} headers are likely 
> stripped away before the S3 server tries to match signature hashes), causing 
> a failure like the one below:
> {code}
> 2016-03-29 19:59:30,120 DEBUG [s3a-transfer-shared--pool1-t35] 
> org.apache.http.wire: >> "Authorization: AWS4-HMAC-SHA256 
> Credential=XXX/20160329/eu-central-1/s3/aws4_request, 
> SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-acl;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-version-id,
>  Signature=MNOPQRSTUVWXYZ[\r][\n]"
> …
> com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we 
> calculated does not match the signature you provided. Check your key and 
> signing method. (Service: Amazon S3; Status Code: 403; Error Code: 
> SignatureDoesNotMatch; Request ID: ABC), S3 Extended Request ID: XYZ
> {code}
> This is intermittent because the S3 server does not always add a 
> {{Connection: close}} directive in its response, but whenever we receive one 
> AND clone it, the above exception occurs for the copy request. The copy 
> request is often used in the context of FileOutputCommitter, when many of the 
> MR attempt files on an {{s3a://}} destination filesystem are to be moved to 
> their parent directories post-commit.
> I've also submitted a fix upstream to the AWS Java SDK to strip out the 
> {{Connection}} headers when dealing with {{ObjectMetadata}}; it is pending 
> acceptance and release at https://github.com/aws/aws-sdk-java/pull/669. Until 
> that release is available and can be used by us, we'll need to work around 
> the clone approach by manually excluding the {{Connection}} header (not 
> straightforward, since the {{metadata}} object is private with no mutable 
> access). We can remove this change in the future once a release with the 
> upstream fix is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due to connection closure

2016-03-28 Thread Harsh J (JIRA)
Harsh J created HADOOP-12970:


 Summary: Intermittent signature match failures in S3AFileSystem 
due to connection closure
 Key: HADOOP-12970
 URL: https://issues.apache.org/jira/browse/HADOOP-12970
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
Reporter: Harsh J
Assignee: Harsh J


S3AFileSystem's use of the {{ObjectMetadata#clone()}} method inside the 
{{copyFile}} implementation may fail in circumstances where the connection used 
for obtaining the metadata is closed by the server (i.e. the response carries a 
{{Connection: close}} header). Because this header is not stripped away when 
the {{ObjectMetadata}} is created, and because we clone it for use in the next 
{{CopyObjectRequest}}, the copy request ends up carrying the 
{{Connection: close}} header itself.

This causes signer-related exceptions because the client now includes the 
{{Connection}} header as part of the {{SignedHeaders}}, but the S3 server does 
not receive the same value for it ({{Connection}} headers are likely stripped 
away before the S3 server tries to match signature hashes), causing a failure 
like the one below:

{code}
2016-03-29 19:59:30,120 DEBUG [s3a-transfer-shared--pool1-t35] 
org.apache.http.wire: >> "Authorization: AWS4-HMAC-SHA256 
Credential=XXX/20160329/eu-central-1/s3/aws4_request, 
SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-acl;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-version-id,
 Signature=MNOPQRSTUVWXYZ[\r][\n]"
…
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we 
calculated does not match the signature you provided. Check your key and 
signing method. (Service: Amazon S3; Status Code: 403; Error Code: 
SignatureDoesNotMatch; Request ID: ABC), S3 Extended Request ID: XYZ
{code}

This is intermittent because the S3 server does not always add a 
{{Connection: close}} directive in its response, but whenever we receive one 
AND clone it, the above exception occurs for the copy request. The copy request 
is often used in the context of FileOutputCommitter, when many of the MR 
attempt files on an {{s3a://}} destination filesystem are to be moved to their 
parent directories post-commit.

I've also submitted a fix upstream to the AWS Java SDK to strip out the 
{{Connection}} headers when dealing with {{ObjectMetadata}}; it is pending 
acceptance and release at https://github.com/aws/aws-sdk-java/pull/669. Until 
that release is available and can be used by us, we'll need to work around the 
clone approach by manually excluding the {{Connection}} header (not 
straightforward, since the {{metadata}} object is private with no mutable 
access). We can remove this change in the future once a release with the 
upstream fix is available.
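
For illustration only, a minimal sketch of that interim workaround -- 
rebuilding the metadata rather than calling {{clone()}}, and dropping the 
{{Connection}} header on the way. The helper name is hypothetical; the actual 
patch may take a different shape:
{code}
import java.util.Map;

import com.amazonaws.services.s3.model.ObjectMetadata;

// Hypothetical helper (sketch only): copy every raw header except Connection
// into a fresh ObjectMetadata, so the headers the client signs match what the
// S3 server sees when it recomputes the signature.
public final class MetadataCloner {
  private MetadataCloner() {}

  public static ObjectMetadata cloneWithoutConnection(ObjectMetadata source) {
    ObjectMetadata target = new ObjectMetadata();
    for (Map.Entry<String, Object> header
        : source.getRawMetadata().entrySet()) {
      if (!"Connection".equalsIgnoreCase(header.getKey())) {
        target.setHeader(header.getKey(), header.getValue());
      }
    }
    // user metadata (x-amz-meta-*) is carried over unchanged
    target.setUserMetadata(source.getUserMetadata());
    return target;
  }
}
{code}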



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215085#comment-15215085
 ] 

Arpit Agarwal commented on HADOOP-12969:


I didn't see any direct references to o.a.h.ipc.server in HBase, Hive, etc., 
but it looks like Tez and Slider are using it.

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909
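
For reference, a minimal sketch of what the proposed marking would look like, 
assuming the usual {{org.apache.hadoop.classification}} annotations (the class 
body is elided; illustration only):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Sketch only: the proposed audience/stability marking on the IPC classes.
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class Client {
  // existing ipc.Client implementation would remain unchanged
}
{code}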



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215071#comment-15215071
 ] 

Hitesh Shah commented on HADOOP-12969:
--

Anyone using Hadoop RPC (HBase, Tez, etc.) for client-server communication is 
already using this regardless of it being private. 

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215012#comment-15215012
 ] 

Haibo Chen commented on HADOOP-12964:
-

Forgot to add the Javadoc for XFrameOption. The unit test failures were caused 
by timeouts, which are unrelated to this patch.

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: hadoop-12964.001.patch, hadoop-12964.002.patch, 
> hadoop-12964.003.patch
>
>
> A Nessus report shows a medium-level issue, "Web Application Potentially 
> Vulnerable to Clickjacking", with the description as follows:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add the X-Frame-Options HTTP response header, supported in all major 
> browsers, to mitigate the issue.
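
As an illustration of the mitigation (a minimal sketch, not the attached 
patch), a servlet filter can stamp the header on every response; {{SAMEORIGIN}} 
is an assumed policy choice here:
{code}
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sketch only: set X-Frame-Options on every response so browsers refuse to
// render the page inside a frame served from another origin.
public class XFrameOptionsFilter implements Filter {
  @Override
  public void init(FilterConfig config) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    ((HttpServletResponse) response).setHeader("X-Frame-Options", "SAMEORIGIN");
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}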



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-12964:

Attachment: hadoop-12964.003.patch

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: hadoop-12964.001.patch, hadoop-12964.002.patch, 
> hadoop-12964.003.patch
>
>
> A Nessus report shows a medium-level issue, "Web Application Potentially 
> Vulnerable to Clickjacking", with the description as follows:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add the X-Frame-Options HTTP response header, supported in all major 
> browsers, to mitigate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214995#comment-15214995
 ] 

Hadoop QA commented on HADOOP-12964:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 1 
new + 60 unchanged - 3 fixed = 61 total (was 63) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 27s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 34s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795702/hadoop-12964.002.patch
 |
| JIRA Issue | HADOOP-12964 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e2fe0661396b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 

[jira] [Updated] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12910:
---
Status: Open  (was: Patch Available)

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.
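
A rough sketch of the Future-wrapping idea (illustration only; it defers the 
synchronous call behind an executor, whereas the actual proposal would build 
on the asynchronous ipc.Client from HADOOP-12909 rather than burn a thread per 
call):
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: same rename contract, but the result is consumed via a Future.
public final class FutureRenameSketch {
  private FutureRenameSketch() {}

  public static Future<Boolean> renameAsync(final FileSystem fs,
      final Path src, final Path dst, ExecutorService pool) {
    return pool.submit(new Callable<Boolean>() {
      @Override
      public Boolean call() throws Exception {
        return fs.rename(src, dst); // blocks in the pool, not in the caller
      }
    });
  }
}
{code}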



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12910:
---
Attachment: (was: HADOOP-12910-HDFS-9924.000.patch)

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-28 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214982#comment-15214982
 ] 

Xiaobing Zhou commented on HADOOP-12910:


HDFS-10224 has been created to scope the changes to DistributedFileSystem 
only. I will delete the patch here and change the status to Open.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12701) Run checkstyle on test source files

2016-03-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214972#comment-15214972
 ] 

Andrew Wang commented on HADOOP-12701:
--

This change seems fine; I'm not too worried about adding tens of seconds to 
checkstyle runs.

However, why aren't we seeing checkstyle run by precommit? Do we need to add a 
space to a Java file somewhere to trigger change detection?

> Run checkstyle on test source files
> ---
>
> Key: HADOOP-12701
> URL: https://issues.apache.org/jira/browse/HADOOP-12701
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-12701.001.patch
>
>
> Test source files are not checked by checkstyle because the Maven checkstyle 
> plugin parameter *includeTestSourceDirectory* is *false* by default.
> This proposes enabling checkstyle on test source files in order to improve 
> the quality of unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214955#comment-15214955
 ] 

Arpit Agarwal commented on HADOOP-12969:


Is IPC.Server used by any other components outside of HDFS, YARN and MR? What 
is the benefit of making it @Public?

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214950#comment-15214950
 ] 

Hudson commented on HADOOP-10965:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9514 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9514/])
HADOOP-10965. Print fully qualified path in CommandWithDestination error (wang: 
rev 8bfaa80037365c0790083313a905d1e7d88b0682)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellTouch.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Touch.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java


> Print fully qualified path in CommandWithDestination error messages
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-28 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214932#comment-15214932
 ] 

Xiaobing Zhou commented on HADOOP-12909:


I posted v006 to address your comments, thanks [~ste...@apache.org]. I filed 
HADOOP-12968 to address the changes to TestIPC.TestServer, and HADOOP-12969 to 
cover marking IPC.Client and IPC.Server as @Public, @Evolving.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out and does 
> not wait for the response from the server.
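
For illustration, a minimal sketch (not the actual ipc.Client code) of the 
wait()/notifyAll() pattern the description refers to:
{code}
// Sketch only: a synchronous call is an asynchronous send followed by wait()
// on the call object; the connection's response reader thread sets the result
// and wakes the caller.
class Call {
  private boolean done;
  private Object response;

  synchronized Object waitForResponse() throws InterruptedException {
    while (!done) {
      wait(); // the caller thread parks here in synchronous mode
    }
    return response;
  }

  synchronized void setResponse(Object value) {
    response = value;
    done = true;
    notifyAll(); // the response reader thread wakes the waiting caller
  }
}
{code}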



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12969:
---
Description: Per the discussion in 
[HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
 this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving as 
a result of HADOOP-12909  (was: Per the discussion in 
[HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
 this is to propose marking )

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reassigned HADOOP-12969:
--

Assignee: Xiaobing Zhou

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12902) JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter

2016-03-28 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214929#comment-15214929
 ] 

Gabor Liptak commented on HADOOP-12902:
---

Done.

> JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter
> -
>
> Key: HADOOP-12902
> URL: https://issues.apache.org/jira/browse/HADOOP-12902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Robert Kanter
>Assignee: Gabor Liptak
>  Labels: newbie
> Attachments: HADOOP-12902.1.patch, HADOOP-12902.2.patch, 
> HADOOP-12902.3.patch
>
>
> The Javadocs in {{AuthenticationFilter}} say:
> {noformat}
>  * Out of the box it provides 3 signer secret provider implementations:
>  * "string", "random", and "zookeeper"
> {noformat}
> However, the "string" implementation is no longer available because 
> HADOOP-11748 moved it to be a test-only artifact.  This also doesn't mention 
> anything about the file-backed secret provider ({{FileSignerSecretProvider}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-12969:
--

 Summary: Mark IPC.Client and IPC.Server as @Public, @Evolving
 Key: HADOOP-12969
 URL: https://issues.apache.org/jira/browse/HADOOP-12969
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12969:
---
Description: Per the discussion in 
[HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
 this is to propose marking 

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12902) JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter

2016-03-28 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-12902:
--
Attachment: HADOOP-12902.3.patch

> JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter
> -
>
> Key: HADOOP-12902
> URL: https://issues.apache.org/jira/browse/HADOOP-12902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Robert Kanter
>Assignee: Gabor Liptak
>  Labels: newbie
> Attachments: HADOOP-12902.1.patch, HADOOP-12902.2.patch, 
> HADOOP-12902.3.patch
>
>
> The Javadocs in {{AuthenticationFilter}} say:
> {noformat}
>  * Out of the box it provides 3 signer secret provider implementations:
>  * "string", "random", and "zookeeper"
> {noformat}
> However, the "string" implementation is no longer available because 
> HADOOP-11748 moved it to be a test-only artifact.  This also doesn't mention 
> anything about the file-backed secret provider ({{FileSignerSecretProvider}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12968) Make TestIPC.TestServer implement AutoCloseable

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reassigned HADOOP-12968:
--

Assignee: Xiaobing Zhou

> Make TestIPC.TestServer implement AutoCloseable
> ---
>
> Key: HADOOP-12968
> URL: https://issues.apache.org/jira/browse/HADOOP-12968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose making TestIPC.TestServer implement AutoCloseable to 
> benefit from try-with-resources regarding test cleanup.
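
A minimal sketch of the pattern this enables (names are hypothetical; 
illustration only):
{code}
// Sketch only: an AutoCloseable test server lets try-with-resources guarantee
// shutdown even when an assertion throws, replacing manual finally blocks.
class SketchServer implements AutoCloseable {
  void start() {
    // bind the port and start listener threads
  }

  @Override
  public void close() {
    // stop listener threads and release the port
  }
}

class Usage {
  void run() {
    try (SketchServer server = new SketchServer()) {
      server.start();
      // exercise the RPC client against the server here
    } // server.close() runs here, whether or not the body threw
  }
}
{code}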



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12968) Make TestIPC.TestServer implement AutoCloseable

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12968:
---
Description: Per the discussion in 
[HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
 this is to propose making TestIPC.TestServer implement AutoCloseable to 
benefit from try-with-resources regarding test cleanup.

> Make TestIPC.TestServer implement AutoCloseable
> ---
>
> Key: HADOOP-12968
> URL: https://issues.apache.org/jira/browse/HADOOP-12968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose making TestIPC.TestServer implement AutoCloseable to 
> benefit from try-with-resources regarding test cleanup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12968) Make TestIPC.TestServer implement AutoCloseable

2016-03-28 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-12968:
--

 Summary: Make TestIPC.TestServer implement AutoCloseable
 Key: HADOOP-12968
 URL: https://issues.apache.org/jira/browse/HADOOP-12968
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12968) Make TestIPC.TestServer implement AutoCloseable

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12968:
---
Issue Type: Improvement  (was: Bug)

> Make TestIPC.TestServer implement AutoCloseable
> ---
>
> Key: HADOOP-12968
> URL: https://issues.apache.org/jira/browse/HADOOP-12968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaobing Zhou
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12909:
---
Attachment: HADOOP-12909-HDFS-9924.006.patch

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out and does 
> not wait for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10965:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-2, and branch-2.8. Thank you, John, for fixing this 
long-standing issue!

> Print fully qualified path in CommandWithDestination error messages
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10965:
-
Issue Type: Improvement  (was: Bug)

> Print fully qualified path in CommandWithDestination error messages
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10965:
-
Summary: Print fully qualified path in CommandWithDestination error 
messages  (was: Incorrect error message by fs -copyFromLocal)

> Print fully qualified path in CommandWithDestination error messages
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Print fully qualified path in CommandWithDestination error messages

2016-03-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214908#comment-15214908
 ] 

John Zhuge commented on HADOOP-10965:
-

Thanks [~andrew.wang].

> Print fully qualified path in CommandWithDestination error messages
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214906#comment-15214906
 ] 

Andrew Wang commented on HADOOP-10965:
--

LGTM, will commit shortly.

> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if Hadoop could restore the old behaviour from 1.x, where 
> copyFromLocal would simply create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8143) Change distcp to have -pb on by default

2016-03-28 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214864#comment-15214864
 ] 

Sushanth Sowmyan commented on HADOOP-8143:
--

Hi,

It looks like this patch might have gotten lost in limbo - is there a target 
version of Hadoop we can expect this patch in? As more tools like Hive and 
Falcon use distcp automatically behind the scenes, this becomes more important.

> Change distcp to have -pb on by default
> ---
>
> Key: HADOOP-8143
> URL: https://issues.apache.org/jira/browse/HADOOP-8143
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dave Thompson
>Assignee: Mithun Radhakrishnan
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8143.1.patch
>
>
> We should have preserve blocksize (-pb) on in distcp by default.
> The checksum comparison, which is on by default, will always fail if the 
> block sizes are not the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-12964:

Attachment: hadoop-12964.002.patch

Updated to fix checkstyle issues.

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: hadoop-12964.001.patch, hadoop-12964.002.patch
>
>
> A Nessus report shows a medium-level issue, "Web Application Potentially 
> Vulnerable to Clickjacking", with the following description:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add the X-Frame-Options header, supported in all major browsers, to 
> the HTTP response to mitigate the issue.
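
A minimal illustration of that kind of mitigation (not the attached patch): a 
servlet filter that stamps the header on every response.

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: adds X-Frame-Options so browsers refuse to render
// the page inside a frame on another origin.
public class XFrameOptionsFilter implements Filter {
  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    ((HttpServletResponse) response).setHeader("X-Frame-Options", "SAMEORIGIN");
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}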



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-03-28 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214811#comment-15214811
 ] 

Benoy Antony commented on HADOOP-12082:
---

Thanks [~hgadre],

The patch filename convention is like this: HADOOP-12082-001.patch
Could you please add the documentation regarding this feature?
You can start with hadoop-common-project/hadoop-auth/src/site/markdown/index.md 

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that approach selects the 
> authentication mechanism based on the User-Agent HTTP header, which does not 
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Authenticate) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge that indicates the authentication scheme(s) and parameters 
> applicable to the Request-URI. 
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or the Apache Http Java client) need to 
> be configured to use one scheme over the other, e.g.
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
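
The client-side choice can be illustrated with the curl flags mentioned above; 
host names, ports, and credentials here are placeholders:

{code}
# Kerberos via the Negotiate challenge
curl --negotiate -u : "http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS"
# LDAP via the Basic challenge (TLS is mandatory with Basic in this design)
curl --basic -u alice:secret "https://namenode.example.com:50470/webhdfs/v1/tmp?op=LISTSTATUS"
{code}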



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214768#comment-15214768
 ] 

Haibo Chen commented on HADOOP-12964:
-

The unit test failures and license warning are unrelated. Will update to 
correct the checkstyle issues.

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: hadoop-12964.001.patch
>
>
> A Nessus report shows a medium-level issue, "Web Application Potentially 
> Vulnerable to Clickjacking", with the following description:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add the X-Frame-Options header, supported in all major browsers, to 
> the HTTP response to mitigate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214761#comment-15214761
 ] 

Hadoop QA commented on HADOOP-12964:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
45s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
49s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 63 unchanged - 0 fixed = 66 total (was 63) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 49s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 5s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 38s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName 

[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214753#comment-15214753
 ] 

Hudson commented on HADOOP-12954:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9513 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9513/])
HADOOP-12954. Add a way to change hadoop.security.token.service.use_ip 
(rkanter: rev 8cac1bb09f55ff2f285914e349507472ff86f4d7)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java


> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Fix For: 2.9.0
>
> Attachments: HADOOP-12954.001.patch, HADOOP-12954.002.patch, 
> HADOOP-12954.003.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There are a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  
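
A sketch of the proposed method, mirroring the static block above; the body is 
assumed from this description, not the committed patch:

{code:java}
public static void setConfiguration(Configuration conf) {
  // Reread the token-service setting from a caller-supplied Configuration,
  // so clients without *-site.xml on the classpath can override the default.
  boolean useIp = conf.getBoolean(
      CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
      CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
  setTokenServiceUseIp(useIp);
}
{code}

A client like Oozie could then call {{SecurityUtil.setConfiguration(conf)}} 
with the {{Configuration}} it already builds for its {{JobClient}}.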



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214729#comment-15214729
 ] 

ASF GitHub Bot commented on HADOOP-12916:
-

Github user xiaoyuyao commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/86#discussion_r57619429
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 ---
@@ -1636,6 +1640,15 @@ public long getTimeDuration(String name, long defaultValue, TimeUnit unit) {
     return unit.convert(Long.parseLong(vStr), vUnit.unit());
   }
 
+  public long[] getTimeDurations(String name, TimeUnit unit) {
--- End diff --

Good point. I will address that in the next patch.


> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently the back-off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is open to allow flexible 
> back-off policies, such as one based on the moving average of response time 
> in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214591#comment-15214591
 ] 

ASF GitHub Bot commented on HADOOP-12916:
-

Github user arp7 commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/86#discussion_r57606818
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
 ---
@@ -49,25 +57,68 @@
   private final AtomicReference<BlockingQueue<E>> putRef;
   private final AtomicReference<BlockingQueue<E>> takeRef;
 
+  private RpcScheduler scheduler;
+
   public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
+      Class<? extends RpcScheduler> schedulerClass,
       boolean clientBackOffEnabled, int maxQueueSize, String namespace,
       Configuration conf) {
+    int priorityLevels = parseNumLevels(namespace, conf);
+    this.scheduler = createScheduler(schedulerClass, priorityLevels,
+        namespace, conf);
     BlockingQueue<E> bq = createCallQueueInstance(backingClass,
-        maxQueueSize, namespace, conf);
+        priorityLevels, maxQueueSize, namespace, conf);
     this.clientBackOffEnabled = clientBackOffEnabled;
     this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
     this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
     LOG.info("Using callQueue " + backingClass);
   }
 
+  private static <T extends RpcScheduler> T createScheduler(
+      Class<T> theClass, int priorityLevels, String ns, Configuration conf) {
+    // Used for custom, configurable scheduler
+    try {
+      Constructor<T> ctor = theClass.getDeclaredConstructor(int.class,
+          String.class, Configuration.class);
+      return ctor.newInstance(priorityLevels, ns, conf);
+    } catch (RuntimeException e) {
--- End diff --

See HDFS-9478 which is fixing exception handling when constructing 
callqueue instances. We could use a similar fix for createScheduler.


> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently the back-off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is open to allow flexible 
> back-off policies, such as one based on the moving average of response time 
> in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214584#comment-15214584
 ] 

ASF GitHub Bot commented on HADOOP-12916:
-

Github user arp7 commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/86#discussion_r57606373
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 ---
@@ -1636,6 +1640,15 @@ public long getTimeDuration(String name, long defaultValue, TimeUnit unit) {
     return unit.convert(Long.parseLong(vStr), vUnit.unit());
   }
 
+  public long[] getTimeDurations(String name, TimeUnit unit) {
--- End diff --

Consider returning a List.
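
A hypothetical standalone sketch of the List-returning variant suggested here, 
not the committed Configuration method: entries are parsed as millisecond 
values and converted to the requested unit (the real getTimeDuration also 
understands suffixes such as "10s" or "5m").

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

static List<Long> getTimeDurations(String value, TimeUnit unit) {
  // Split the comma-separated property value and convert each entry.
  List<Long> durations = new ArrayList<>();
  for (String v : value.split(",")) {
    long millis = Long.parseLong(v.trim());
    durations.add(unit.convert(millis, TimeUnit.MILLISECONDS));
  }
  return durations;
}
{code}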


> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently the back-off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is open to allow flexible 
> back-off policies, such as one based on the moving average of response time 
> in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12935) API documentation for dynamic subcommands

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214558#comment-15214558
 ] 

Hadoop QA commented on HADOOP-12935:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795642/HADOOP-12935.00.patch 
|
| JIRA Issue | HADOOP-12935 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 69f408263309 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8cac1bb |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8942/artifact/patchprocess/whitespace-eol.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8942/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8942/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> API documentation for dynamic subcommands
> -
>
> Key: HADOOP-12935
> URL: https://issues.apache.org/jira/browse/HADOOP-12935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12935.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-28 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12954:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, Steve. Committed to trunk and branch-2!

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Fix For: 2.9.0
>
> Attachments: HADOOP-12954.001.patch, HADOOP-12954.002.patch, 
> HADOOP-12954.003.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There are a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12935) API documentation for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12935:
--
Summary: API documentation for dynamic subcommands  (was: API documenation 
for dynamic subcommands)

> API documentation for dynamic subcommands
> -
>
> Key: HADOOP-12935
> URL: https://issues.apache.org/jira/browse/HADOOP-12935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12935.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12505) ShellBasedUnixGroupMapping should support group names with space

2016-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12505:
-
Assignee: (was: Wei-Chiu Chuang)

> ShellBasedUnixGroupMapping should support group names with space
> 
>
> Key: HADOOP-12505
> URL: https://issues.apache.org/jira/browse/HADOOP-12505
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>
> In a typical configuration, group names are obtained from AD through SSSD/LDAP. 
> AD permits group names with spaces (e.g. "Domain Users").
> Unfortunately, the present implementation of ShellBasedUnixGroupMapping 
> parses the output of the shell command "id -Gn" and assumes group names are 
> separated by spaces.
> Support could be achieved by using a combination of shell commands, for example,
> bash -c 'id -G weichiu | tr " " "\n" | xargs -I % getent group "%" | cut 
> -d":" -f1'
> But I am still looking for a more compact form, and potentially a more 
> efficient one.
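
The pipeline quoted above, expanded with a comment per stage (the username is 
the reporter's example):

{code}
# 1. id -G prints numeric group IDs, which never contain spaces.
# 2. tr puts one ID per line; xargs resolves each ID with getent.
# 3. getent prints name:passwd:gid:members, so cut keeps field 1, the name.
id -G weichiu | tr " " "\n" | xargs -I % getent group "%" | cut -d":" -f1
{code}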



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12505) ShellBasedUnixGroupMapping should support group names with space

2016-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-12505.
--
Resolution: Won't Fix

> ShellBasedUnixGroupMapping should support group names with space
> 
>
> Key: HADOOP-12505
> URL: https://issues.apache.org/jira/browse/HADOOP-12505
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> In a typical configuration, group names are obtained from AD through SSSD/LDAP. 
> AD permits group names with spaces (e.g. "Domain Users").
> Unfortunately, the present implementation of ShellBasedUnixGroupMapping 
> parses the output of the shell command "id -Gn" and assumes group names are 
> separated by spaces.
> Support could be achieved by using a combination of shell commands, for example,
> bash -c 'id -G weichiu | tr " " "\n" | xargs -I % getent group "%" | cut 
> -d":" -f1'
> But I am still looking for a more compact form, and potentially a more 
> efficient one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12964) Http server vulnerable to clickjacking

2016-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-12964:

Status: Patch Available  (was: Open)

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: hadoop-12964.001.patch
>
>
> A Nessus report shows a medium-level issue, "Web Application Potentially 
> Vulnerable to Clickjacking", with the following description:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add the X-Frame-Options header, supported in all major browsers, to 
> the HTTP response to mitigate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12935) API documenation for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12935:
--
Attachment: HADOOP-12935.00.patch

> API documenation for dynamic subcommands
> 
>
> Key: HADOOP-12935
> URL: https://issues.apache.org/jira/browse/HADOOP-12935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12935.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12935) API documenation for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12935:
--
Status: Patch Available  (was: Open)

> API documenation for dynamic subcommands
> 
>
> Key: HADOOP-12935
> URL: https://issues.apache.org/jira/browse/HADOOP-12935
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12935.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214366#comment-15214366
 ] 

Hadoop QA commented on HADOOP-12931:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795638/HADOOP-12931.00.patch 
|
| JIRA Issue | HADOOP-12931 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux f7842d181bca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 90fcb16 |
| shellcheck | v0.4.3 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8941/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8941/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8941/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12931.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12931:
-

Assignee: Allen Wittenauer

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12931.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12931:
--
Status: Patch Available  (was: Open)

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12931.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12931) bin/hadoop work for dynamic subcommands

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12931:
--
Attachment: HADOOP-12931.00.patch

-00:
* basic restructure
* pull out archive, distch, and distcp as they'll become dynamic in HADOOP-12936

> bin/hadoop work for dynamic subcommands
> ---
>
> Key: HADOOP-12931
> URL: https://issues.apache.org/jira/browse/HADOOP-12931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12931.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9657) NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214324#comment-15214324
 ] 

Hadoop QA commented on HADOOP-9657:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 2s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12761282/HADOOP-9657.02.patch |
| JIRA Issue | HADOOP-9657 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7466e0c8a65f 3.13.0-36-lowlatency #63-Ubuntu 

[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-03-28 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214317#comment-15214317
 ] 

Tony Wu commented on HADOOP-12875:
--

Hi [~vishwajeet.dusane],

Thanks a lot for filing a separate JIRA, isolating the ADL unit tests and 
providing an FS contract test! This is very helpful. 

I did a quick scan of the patch and have the following comments regarding the 
newly added FS contract tests:

Instead of having the following change in various contract test implementations:
{code:java}
+  @Override
+  protected boolean isSupported(String feature) throws IOException {
+return true;
+  }
{code}
It's better to define an {{adls.xml}} file where you specify the file system 
behavior. You can refer to {{wasb.xml}} as an example. The page 
[here|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/testing.html]
 also describes the details of FS contract tests and best practices for adding 
new ones.

Instead of adding {{@Ignore}} to test cases unsupported by ADL, I believe you 
can use {{ContractTestUtils#unsupported("...")}} or 
{{ContractTestUtils#unsupported("skip")}}.
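
A hypothetical skeleton of such an {{adls.xml}}, modeled on {{wasb.xml}}; the 
option values are illustrative, not verified ADL behavior:

{code:xml}
<configuration>
  <!-- fs.contract.* keys declare which behaviors the store supports -->
  <property>
    <name>fs.contract.is-case-sensitive</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.contract.supports-append</name>
    <value>true</value>
  </property>
</configuration>
{code}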


> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Hadoop-12875-001.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11393:
--
Attachment: HADOOP-11393.05.patch

Good catch! Thanks for the review!

-05:
* fix the bats test to unset both _HOME and _PREFIX
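
The helper change amounts to the following sketch (exact file context assumed):

{code}
# Clear both variables before each test case so neither leaks in
# from the surrounding environment.
unset HADOOP_HOME
unset HADOOP_PREFIX
{code}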

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tracing
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch, 
> HADOOP-11393.02.patch, HADOOP-11393.03.patch, HADOOP-11393.04.patch, 
> HADOOP-11393.05.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9657) NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports

2016-03-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9657:
---
Target Version/s: 2.9.0  (was: 2.8.0)
  Status: Patch Available  (was: Open)

resubmitting

> NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 
> ports
> --
>
> Key: HADOOP-9657
> URL: https://issues.apache.org/jira/browse/HADOOP-9657
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: HADOOP-9657.01.patch, HADOOP-9657.02.patch
>
>
> When an exception is wrapped, it may look like {{0.0.0.0:0 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused}}
> We should recognise all-zero IP addresses and 0 ports and flag them as "your 
> configuration of the endpoint is wrong", as that is clearly the case
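
A minimal sketch of the recognition logic being asked for; the method name is 
an assumption, not existing NetUtils API:

{code:java}
static boolean isUnsetEndpoint(java.net.InetSocketAddress addr) {
  // Port 0 or a wildcard address (0.0.0.0, ::) in a connect target
  // almost always means the endpoint was never configured.
  return addr.getPort() == 0
      || (addr.getAddress() != null && addr.getAddress().isAnyLocalAddress());
}
{code}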



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9657) NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports

2016-03-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9657:
---
Status: Open  (was: Patch Available)

> NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 
> ports
> --
>
> Key: HADOOP-9657
> URL: https://issues.apache.org/jira/browse/HADOOP-9657
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: HADOOP-9657.01.patch, HADOOP-9657.02.patch
>
>
> When an exception is wrapped, it may look like {{0.0.0.0:0 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused}}
> We should recognise all-zero IP addresses and 0 ports and flag them as "your 
> configuration of the endpoint is wrong", as that is clearly the case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2016-03-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214114#comment-15214114
 ] 

Steve Loughran commented on HADOOP-12956:
-

Has anyone spoken to the Log4J team about this?

Chris - that's an interesting question about mixing. We'd need to make sure the 
SLF4J-to-Log4j2 JAR was on the CP, and that log4j was off it. Would 
commons-logging be able to feed into log4j2 then? Or is a full move off it 
required? That's more expensive, especially as there are so many nested 
libraries which use what has been the de facto Java logging API since the 
Avalon framework was written.

Would the log tuning servlets work? They are invaluable for debugging live 
services.

We can still continue the move to SLF4J APIs, even for branch-2 code. It works 
great as an API... it's the new back end which is the trouble spot.
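
The API-level move referred to above looks like this at a call site; the class 
is a made-up example:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void run(String path) {
    // Parameterized logging: the message is only formatted if the level is
    // enabled, and the call site is identical whichever back end is bound.
    LOG.info("processing {}", path);
  }
}
{code}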

> Inevitable Log4j2 migration via slf4j
> -
>
> Key: HADOOP-12956
> URL: https://issues.apache.org/jira/browse/HADOOP-12956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gopal V
>
> {{5 August 2015 --The Apache Logging Services™ Project Management Committee 
> (PMC) has announced that the Log4j™ 1.x logging framework has reached its end 
> of life (EOL) and is no longer officially supported.}}
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> A framework-wide log4j2 upgrade has to be synchronized, partly for the 
> improved performance brought about by log4j2.
> https://logging.apache.org/log4j/2.x/manual/async.html#Performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12873) Remove MRv1 terms from HttpAuthentication.md

2016-03-28 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213930#comment-15213930
 ] 

Brahma Reddy Battula commented on HADOOP-12873:
---

[~ajisakaa] thanks a lot for the review and commit.

> Remove MRv1 terms from HttpAuthentication.md
> 
>
> Key: HADOOP-12873
> URL: https://issues.apache.org/jira/browse/HADOOP-12873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12873-002.patch, HADOOP-12873.patch
>
>
> {noformat:HttpAuthentication.md}
> By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and 
> DataNodes) allow access without any form of authentication.
> {noformat}
> We should use ResourceManager and NodeManager instead of JobTracker and 
> TaskTracker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213927#comment-15213927
 ] 

Hadoop QA commented on HADOOP-12955:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 26s 
{color} | {color:green} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 0 new + 
11 unchanged - 10 fixed = 11 total (was 21) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 9m 4s 
{color} | {color:green} root in the patch passed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 21m 58s 
{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 
21 unchanged - 10 fixed = 21 total (was 31) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 32s 
{color} | {color:green} root in the patch passed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 49s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 10s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch 

[jira] [Commented] (HADOOP-12873) Remove MRv1 terms from HttpAuthentication.md

2016-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213919#comment-15213919
 ] 

Hudson commented on HADOOP-12873:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9509 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9509/])
HADOOP-12873. Remove MRv1 terms from HttpAuthentication.md. Contributed 
(aajisaka: rev 01cfee63815a1c9d63652edc21db63626df7e53c)
* hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md


> Remove MRv1 terms from HttpAuthentication.md
> 
>
> Key: HADOOP-12873
> URL: https://issues.apache.org/jira/browse/HADOOP-12873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12873-002.patch, HADOOP-12873.patch
>
>
> {noformat:HttpAuthentication.md}
> By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and 
> DataNodes) allow access without any form of authentication.
> {noformat}
> We should use ResourceManager and NodeManager instead of JobTracker and 
> TaskTracker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12873) Remove MRv1 terms from HttpAuthentication.md

2016-03-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12873:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1, committed this to trunk, branch-2, and branch-2.8. Thanks [~brahmareddy] 
for the contribution!

> Remove MRv1 terms from HttpAuthentication.md
> 
>
> Key: HADOOP-12873
> URL: https://issues.apache.org/jira/browse/HADOOP-12873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12873-002.patch, HADOOP-12873.patch
>
>
> {noformat:HttpAuthentication.md}
> By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and 
> DataNodes) allow access without any form of authentication.
> {noformat}
> We should use ResourceManager and NodeManager instead of JobTracker and 
> TaskTracker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213877#comment-15213877
 ] 

Kai Zheng commented on HADOOP-12924:


Thanks [~lirui] for the nice summary! It looks fine to me.
We may need to define the required codec names and configuration key here now 
in order to proceed. How about:
* For the HDFS-RAID coder/codec: {{rs-legacy}};
* For the ISA-L compatible coder/codec: {{rs-isal}};
* Configuration key: 
org.apache.hadoop.erasurecode.codec.{{CODEC-NAME}}.rawcoder; this configures 
the corresponding raw coder factory class (see the sketch below).

Sounds good? Thanks.
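
To make this concrete, here is a minimal sketch of resolving such a key with a 
plain {{Configuration}} lookup. The key pattern and the codec names follow this 
comment; the factory-class handling and the class/method names are illustrative 
assumptions, not code from the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: builds the proposed per-codec key, e.g.
// org.apache.hadoop.erasurecode.codec.rs-legacy.rawcoder, and reads the raw
// coder factory class name from it; the default-value fallback is an
// assumption for illustration.
public final class RawCoderKeySketch {
  static final String KEY_FORMAT =
      "org.apache.hadoop.erasurecode.codec.%s.rawcoder";

  static String rawCoderFactoryClass(Configuration conf, String codecName,
      String defaultFactoryClass) {
    return conf.get(String.format(KEY_FORMAT, codecName), defaultFactoryClass);
  }
}
{code}

With the names above, the resulting keys would be 
org.apache.hadoop.erasurecode.codec.rs-legacy.rawcoder and 
org.apache.hadoop.erasurecode.codec.rs-isal.rawcoder.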

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-28 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213859#comment-15213859
 ] 

Akira AJISAKA commented on HADOOP-11393:


Mostly looks good to me. One minor nit:
* {{unset HADOOP_HOME}} appears twice in hadoop-functions_test_helper.bash.

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tracing
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch, 
> HADOOP-11393.02.patch, HADOOP-11393.03.patch, HADOOP-11393.04.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.
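
The lookup order proposed above is simple to picture. Here is a minimal sketch 
in Java, purely illustrative (the names are assumptions and the actual change 
is to the shell scripts): prefer HADOOP_HOME, and still honor HADOOP_PREFIX 
when only it is set.

{code:java}
// Illustrative sketch of the proposed precedence only; the real patch edits
// the shell code. Class and method names here are assumptions.
public final class HadoopHomeSketch {
  static String resolveHadoopHome() {
    String home = System.getenv("HADOOP_HOME");
    if (home == null || home.isEmpty()) {
      home = System.getenv("HADOOP_PREFIX"); // legacy name, still honored
    }
    return home;
  }

  public static void main(String[] args) {
    System.out.println("Resolved Hadoop home: " + resolveHadoopHome());
  }
}
{code}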



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12955) Fix bugs in the initialization of the ISA-L library JNI bindings

2016-03-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12955:
---
Attachment: HADOOP-12955-v3.patch

Updated the patch according to the discussion above. Tested and passing both 
with and without ISA-L installed, and with or without the related build options.
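
For context, the failure mode being fixed is a native call made before the 
library is known to be loaded. A minimal sketch of the guard pattern, assuming 
hypothetical class, library, and method names (an illustration of the idea, not 
the attached patch):

{code:java}
// Sketch only: keep JNI entry points behind a load check so that callers such
// as "hadoop checknative" get a plain Java answer instead of a VM crash when
// the native bindings are absent. Library and method names are assumptions.
public final class IsalProbeSketch {
  private static volatile boolean loaded;

  static {
    try {
      System.loadLibrary("hadoop"); // assumption: bindings ship in libhadoop
      loaded = true;
    } catch (UnsatisfiedLinkError e) {
      loaded = false; // stay usable without native support
    }
  }

  private static native String getLibraryNameNative();

  public static String getLibraryName() {
    // An unguarded native call is what the stack trace in the description
    // below shows crashing in jni_NewStringUTF; check the load state first.
    return loaded ? getLibraryNameNative() : "ISA-L library not loaded";
  }
}
{code}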

> Fix bugs in the initialization of the ISA-L library JNI bindings
> 
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch, 
> HADOOP-12955-v3.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed; the log showed something 
> like the following:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)