[jira] [Assigned] (HADOOP-13676) Update jackson from 1.9.13 to 2.x in hadoop-mapreduce

2016-10-03 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-13676:
--

Assignee: Akira Ajisaka

> Update jackson from 1.9.13 to 2.x in hadoop-mapreduce
> -
>
> Key: HADOOP-13676
> URL: https://issues.apache.org/jira/browse/HADOOP-13676
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13676) Update jackson from 1.9.13 to 2.x in hadoop-mapreduce

2016-10-03 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-13676:
--

 Summary: Update jackson from 1.9.13 to 2.x in hadoop-mapreduce
 Key: HADOOP-13676
 URL: https://issues.apache.org/jira/browse/HADOOP-13676
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2016-10-03 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13332:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-9991)

> Remove jackson 1.9.13 and switch all jackson code to 2.x code line
> --
>
> Key: HADOOP-13332
> URL: https://issues.apache.org/jira/browse/HADOOP-13332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: PJ Fanning
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, 
> HADOOP-13332.02.patch, HADOOP-13332.03.patch
>
>
> The jackson 1.9 code line is no longer maintained; we should upgrade to the 2.x code line.
> Most changes from jackson 1.9 to 2.x just involve changing the package name.
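The mechanical part of that migration is usually just the package move. A minimal before/after sketch (plain Jackson API, not tied to any specific Hadoop patch):

{code}
// Before (Jackson 1.9):  import org.codehaus.jackson.map.ObjectMapper;
// After  (Jackson 2.x):
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonMigrationExample {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // Most call sites compile unchanged once the imports are updated.
    String json = mapper.writeValueAsString(new int[] {1, 2, 3});
    System.out.println(json); // prints [1,2,3]
  }
}
{code}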



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2016-10-03 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544533#comment-15544533
 ] 

Akira Ajisaka commented on HADOOP-13332:


I'd like to split the big patch into some smaller ones and will create JIRAs.

> Remove jackson 1.9.13 and switch all jackson code to 2.x code line
> --
>
> Key: HADOOP-13332
> URL: https://issues.apache.org/jira/browse/HADOOP-13332
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: PJ Fanning
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, 
> HADOOP-13332.02.patch, HADOOP-13332.03.patch
>
>
> The jackson 1.9 code line is no longer maintained; we should upgrade to the 2.x code line.
> Most changes from jackson 1.9 to 2.x just involve changing the package name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544479#comment-15544479
 ] 

Hadoop QA commented on HADOOP-13674:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831467/HADOOP-13674-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ae1a347cc164 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 612aa0c |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java

[jira] [Commented] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1556#comment-1556
 ] 

Chris Nauroth commented on HADOOP-13674:


I still need to do a full test run against the service.

> S3A can provide a more detailed error message when accessing a bucket through 
> an incorrect S3 endpoint.
> ---
>
> Key: HADOOP-13674
> URL: https://issues.apache.org/jira/browse/HADOOP-13674
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13674-branch-2.001.patch
>
>
> When accessing the S3 service through a region-specific endpoint, the bucket 
> must be located in that region.  If the client attempts to access a bucket 
> that is not located in that region, then the service replies with a 301 
> redirect and the correct region endpoint.  However, the exception thrown by 
> S3A does not include the correct endpoint.  If we included that information 
> in the exception, it would make it easier for users to diagnose and fix 
> incorrect configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13674:
---
Attachment: HADOOP-13674-branch-2.001.patch

I'm attaching patch 001.  Summary:

* Add specific exception translation logic for an HTTP 301 response.  Include 
the recommended S3 endpoint and a hint to check {{fs.s3a.endpoint}} in the 
exception message (a rough sketch follows after this list).
* Update example error in documentation to show that the error message contains 
the recommended endpoint.
* Introduce a new unit test suite, {{TestS3AExceptionTranslation}}.  This 
consists of tests refactored out of {{ITestS3AFailureHandling}} that didn't 
really have a dependency on the S3 service, a new test for an HTTP 301 
response, and several other tests for HTTP status codes that weren't already 
covered, like 403, 410 and 416.
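A hedged sketch of what such translation logic can look like, assuming the AWS SDK v1 {{AmazonS3Exception}} type; the helper structure and message wording here are illustrative, not lifted from the patch:

{code}
import com.amazonaws.services.s3.model.AmazonS3Exception;
import java.io.IOException;

final class RedirectTranslationSketch {
  // S3 names the correct endpoint in the PermanentRedirect error body;
  // the SDK surfaces it through the additional-details map.
  static IOException translate(String bucket, AmazonS3Exception e) {
    if (e.getStatusCode() == 301 && e.getAdditionalDetails() != null) {
      String endpoint = e.getAdditionalDetails().get("Endpoint");
      return new IOException("Received permanent redirect response for bucket "
          + bucket + "; the correct endpoint is " + endpoint
          + ". Check the fs.s3a.endpoint configuration.", e);
    }
    return new IOException("Error accessing bucket " + bucket, e);
  }
}
{code}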


> S3A can provide a more detailed error message when accessing a bucket through 
> an incorrect S3 endpoint.
> ---
>
> Key: HADOOP-13674
> URL: https://issues.apache.org/jira/browse/HADOOP-13674
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13674-branch-2.001.patch
>
>
> When accessing the S3 service through a region-specific endpoint, the bucket 
> must be located in that region.  If the client attempts to access a bucket 
> that is not located in that region, then the service replies with a 301 
> redirect and the correct region endpoint.  However, the exception thrown by 
> S3A does not include the correct endpoint.  If we included that information 
> in the exception, it would make it easier for users to diagnose and fix 
> incorrect configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13674:
---
Status: Patch Available  (was: Open)

> S3A can provide a more detailed error message when accessing a bucket through 
> an incorrect S3 endpoint.
> ---
>
> Key: HADOOP-13674
> URL: https://issues.apache.org/jira/browse/HADOOP-13674
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13674-branch-2.001.patch
>
>
> When accessing the S3 service through a region-specific endpoint, the bucket 
> must be located in that region.  If the client attempts to access a bucket 
> that is not located in that region, then the service replies with a 301 
> redirect and the correct region endpoint.  However, the exception thrown by 
> S3A does not include the correct endpoint.  If we included that information 
> in the exception, it would make it easier for users to diagnose and fix 
> incorrect configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544304#comment-15544304
 ] 

Chris Nauroth commented on HADOOP-13502:


bq. Can we keep is-blobstore purely as a high level flag in those *.xml files?

It sounds like you'd like to see is-blobstore kept as a purely descriptive 
flag.  I can see the appeal of that, but unfortunately, I think it would 
conflict with what I tried to achieve for backward compatibility in the 001 
patch.

Some of the tests were changed to check the new flags, but also continue 
checking the is-blobstore flag as a fallback.  This helps for backward 
compatibility if there is a file system implementation outside of the Hadoop 
source tree that has subclassed {{AbstractContractCreateTest}} to run those 
contract tests in its own project.  If we keep is-blobstore in the XML files, 
then I would also need to remove the fallback checks from the code, because I want 
WASB and S3A to run the tests with the stricter checks enforced.

Overall, I'd prefer to stick with the approach in the 001 patch, but let me 
know your thoughts.
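For illustration only, the fallback pattern described above might read roughly like this inside a contract-test base class; the new flag name {{create-visibility-delayed}} is an assumption for the example, and only {{is-blobstore}} is from this issue:

{code}
// Sketch inside an AbstractFSContractTestBase subclass; the flag name
// "create-visibility-delayed" is hypothetical, for illustration only.
protected boolean relaxedCreateSemantics() {
  return getContract().isSupported("create-visibility-delayed", false)
      // Fall back to the deprecated flag so out-of-tree subclasses that
      // only set fs.contract.is-blobstore keep their old, relaxed behavior.
      || getContract().isSupported("is-blobstore", false);
}
{code}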

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-10-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544287#comment-15544287
 ] 

Chris Nauroth commented on HADOOP-13081:


This JIRA will take one of two directions depending on discussion:

* If this becomes effectively a "Won't Fix", then re-resolve this as fixed in 
3.0.0-alpha1 and open a new JIRA to track removal of the API in 3.0.0-alpha2, 
for the sake of accuracy in release notes.
* If the change is accepted in some form, then re-resolve this as fixed in 
3.0.0-alpha1 and open a new JIRA targeted to 3.0.0-alpha2 and 2.8.0 to track 
the corrected patch.

[~sershe], I think you're best equipped to provide justification for this API, 
because you're effectively already doing it via reflection in Hive.

bq. We don't have control over which parts of the code need kerberos or tokens; 
I suspect that usually only one would be needed but we don't know which one.

Can you describe why you don't have control?  Intuitively, I'd expect to see 
isolated pieces of code that need to make service calls on behalf of the user 
with a delegation token (a separate UGI), and then other parts of the code acting 
as the privileged user.  Maybe I'm not understanding the full context.
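For reference, a hedged sketch of the "separate UGI per task" pattern described above, using only public {{UserGroupInformation}} APIs; the principal, keytab, and token values are placeholders:

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

final class PerTaskUgiSketch {
  static void runTask(Token<?> taskToken) throws IOException, InterruptedException {
    // One Kerberos login for the privileged user (placeholder principal/keytab)...
    UserGroupInformation loginUgi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI(
            "service/host@EXAMPLE.COM", "/etc/security/service.keytab");

    // ...and a separate proxy-user UGI per task carrying only that task's token.
    UserGroupInformation taskUgi =
        UserGroupInformation.createProxyUser("task-user", loginUgi);
    taskUgi.addToken(taskToken);

    taskUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // Service calls here act as the task user, authenticated by the token.
      return null;
    });
  }
}
{code}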


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from the ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling the ticket cache location - the 
> above method, which I only see called in tests, uses a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket-cache-related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with the first available one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544232#comment-15544232
 ] 

Hadoop QA commented on HADOOP-13578:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 53 new + 82 unchanged 
- 1 fixed = 135 total (was 83) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 50s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQ

[jira] [Commented] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort

2016-10-03 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544194#comment-15544194
 ] 

Brahma Reddy Battula commented on HADOOP-13234:
---

[~xyao] Yes, this will solve the issue.

> Get random port by new ServerSocket(0).getLocalPort() in 
> ServerSocketUtil#getPort
> -
>
> Key: HADOOP-13234
> URL: https://issues.apache.org/jira/browse/HADOOP-13234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> As per [~iwasakims] comment from 
> [here|https://issues.apache.org/jira/browse/HDFS-10367?focusedCommentId=15275604&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15275604]
> we can get available random port by {{new ServerSocket(0).getLocalPort()}} 
> and it's more portable. 
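For context, the quoted suggestion boils down to this (plain JDK, no Hadoop specifics):

{code}
import java.io.IOException;
import java.net.ServerSocket;

final class FreePort {
  // Binding to port 0 asks the OS for a free ephemeral port; closing the
  // socket releases it. The usual caveat applies: another process may grab
  // the port between close() and reuse.
  static int get() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
}
{code}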



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-10-03 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544117#comment-15544117
 ] 

Mingliang Liu commented on HADOOP-13628:


Can you provide branch-2 and branch-2.7 patches, [~cheersyang]? I saw 
non-trivial conflicts when committing. Thanks.

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> HADOOP-13628.02.patch, HADOOP-13628.03.patch, HADOOP-13628.04.patch, 
> HADOOP-13628.05.patch, HADOOP-13628.06.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite a lot of overhead to send the entire configuration in an HTTP 
> response over the network. Propose supporting a {{name}} parameter in the HTTP 
> request; issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> returns the output
> {code}
> {"property":{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544030#comment-15544030
 ] 

Hadoop QA commented on HADOOP-10101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-10101 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-10101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12699092/HADOOP-10101-011.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10648/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Rakesh R
>Assignee: Vinayakumar B
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2016-10-03 Thread Taklon Stephen Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543958#comment-15543958
 ] 

Taklon Stephen Wu commented on HADOOP-10101:


Ping: any plans on upgrading Guava in this thread? BTW, Guava is now at 19.0.

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Rakesh R
>Assignee: Vinayakumar B
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-10-03 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-13675:
---
Attachment: HADOOP-13675.001.patch

Adding the first iteration of the patch to fix the return-value handling for 
deletes. 

Testing: the patch contains a new test to verify the changes made. The changes 
have also been tested against the FileSystemContractLive tests for both Block 
Blobs and Page Blobs.

> Bug in return value for delete() calls in WASB
> --
>
> Key: HADOOP-13675
> URL: https://issues.apache.org/jira/browse/HADOOP-13675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
> Fix For: 2.9.0
>
> Attachments: HADOOP-13675.001.patch
>
>
> Current implementation of WASB does not correctly handle multiple 
> threads/clients calling delete on the same file. The expected behavior in 
> such scenarios is that only one of the threads should delete the file and 
> return true, while all other threads should receive false. However, in the 
> current implementation, even though only one thread deletes the file, multiple 
> clients incorrectly get "true" as the return value from the delete() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-10-03 Thread Dushyanth (JIRA)
Dushyanth created HADOOP-13675:
--

 Summary: Bug in return value for delete() calls in WASB
 Key: HADOOP-13675
 URL: https://issues.apache.org/jira/browse/HADOOP-13675
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure, fs/azure
Affects Versions: 2.8.0
Reporter: Dushyanth
 Fix For: 2.9.0


Current implementation of WASB does not correctly handle multiple 
threads/clients calling delete on the same file. The expected behavior in such 
scenarios is that only one of the threads should delete the file and return 
true, while all other threads should receive false. However, in the current 
implementation, even though only one thread deletes the file, multiple clients 
incorrectly get "true" as the return value from the delete() call.
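The expected contract can be phrased as a small test-style sketch (illustrative only; the helper name is invented and the fs/path setup is omitted):

{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class DeleteRaceSketch {
  // With N threads racing FileSystem.delete() on one file, exactly one call
  // should return true; the rest should see false because the file is gone.
  static int countTrueReturns(FileSystem fs, Path path, int threads)
      throws InterruptedException {
    AtomicInteger trueReturns = new AtomicInteger();
    Thread[] workers = new Thread[threads];
    for (int i = 0; i < threads; i++) {
      workers[i] = new Thread(() -> {
        try {
          if (fs.delete(path, false)) {
            trueReturns.incrementAndGet();
          }
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
      workers[i].start();
    }
    for (Thread t : workers) {
      t.join();
    }
    return trueReturns.get(); // expected: exactly 1
  }
}
{code}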



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543788#comment-15543788
 ] 

Manoj Govindassamy commented on HADOOP-13055:
-

Thanks [~zhz], [~eddyxu]. I will take over this task; thanks for the patch.

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}
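Assuming the feature from this patch is in place, wiring the quoted mount-table entry up programmatically would look roughly like this (the mount table name and namenode URI mirror the quoted comment):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class LinkMergeSlashSketch {
  static void listRoot() throws Exception {
    Configuration conf = new Configuration();
    // Merge the mount table's root with the root (slash) of hdfs://nn99/.
    conf.set("fs.viewfs.mounttable.default.linkMergeSlash", "hdfs://nn99/");
    FileSystem viewFs = FileSystem.get(URI.create("viewfs://default/"), conf);
    for (FileStatus status : viewFs.listStatus(new Path("/"))) {
      System.out.println(status.getPath()); // top-level directories of nn99
    }
  }
}
{code}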



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543784#comment-15543784
 ] 

Lei (Eddy) Xu commented on HADOOP-13055:


[~zhz] Thanks. I assigned this to [~manojg].


> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13055:
---
Assignee: Manoj Govindassamy

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13055:
---
Assignee: (was: Zhe Zhang)

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543769#comment-15543769
 ] 

Zhe Zhang commented on HADOOP-13055:


Sorry for getting back to this late.

[~shv] Yes the patch only implements {{linkMergeSlash}} instead of 
{{linkMerge}} in general.

[~manojg] Thanks for the interest! Yes, it would be great if you could take over 
this task. Unassigning myself now. I'll get back to your question after 
refreshing my memory of the patch.

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543699#comment-15543699
 ] 

churro morales edited comment on HADOOP-13578 at 10/3/16 11:10 PM:
---

Ran the mapreduce jobs with 
{noformat} 
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 -Dmapreduce.map.output.compress=true 
-Dmapreduce.output.fileoutputformat.compress=true 
-Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 wcin wcout-zst 

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
  wcout-zst wcout-zst2

{noformat}

* Fixed the warnings for ZStandardDecompressor.c
* Updated the Building.txt
* Used the constant IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT and fixed the 
default compression level 
* Sorted out the TODO for the compression overhead.

[~jlowe] What do you think about adding a test that goes through all the 
codecs and runs the M/R job you used as your example with the MRMiniCluster? Do 
you think that would be worthwhile?



was (Author: churromorales):
Ran the mapreduce jobs with 
{noformat} 
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 -Dmapreduce.map.output.compress=true 
-Dmapreduce.output.fileoutputformat.compress=true 
-Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 wcin wcout-zst 

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
  wcout-zst wcout-zst2

{noformat}

* Fixed the warnings for ZStandardDecompressor.c
* Updated the Building.txt
* Used the constant IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT and fixed the 
default compression level 
* Sorted out the TODO for the compression overhead.


> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  
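Once added, such a codec plugs into Hadoop's generic {{CompressionCodec}} API. A hedged round-trip sketch (the codec class name is from this issue; everything else is standard Hadoop io.compress API):

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

final class ZstdRoundTripSketch {
  static byte[] roundTrip(byte[] data) throws Exception {
    Configuration conf = new Configuration();
    CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
        conf.getClassByName("org.apache.hadoop.io.compress.ZStandardCodec"), conf);

    // Compress into an in-memory buffer.
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    try (CompressionOutputStream out = codec.createOutputStream(compressed)) {
      out.write(data);
    }

    // Decompress and return the original bytes.
    try (CompressionInputStream in = codec.createInputStream(
        new ByteArrayInputStream(compressed.toByteArray()))) {
      ByteArrayOutputStream decompressed = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) != -1) {
        decompressed.write(buf, 0, n);
      }
      return decompressed.toByteArray();
    }
  }
}
{code}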



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HADOOP-13578:

Attachment: HADOOP-13578.v1.patch

Ran the mapreduce jobs with 
{noformat} 
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 -Dmapreduce.map.output.compress=true 
-Dmapreduce.output.fileoutputformat.compress=true 
-Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 wcin wcout-zst 

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar 
wordcount 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
  wcout-zst wcout-zst2

{noformat}

* Fixed the warnings for ZStandardDecompressor.c
* Updated the Building.txt
* Used the constant IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT and fixed the 
default compression level 
* Sorted out the TODO for the compression overhead.


> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543689#comment-15543689
 ] 

Xiaoyu Yao commented on HADOOP-13502:
-

[~cnauroth], the changes look pretty good to me. I have one question about:
bq. Deprecated the is-blobstore flag, but retained it in case file system 
implementations outside the Hadoop source tree are using it. (Side note: do we 
need to add audience and stability annotations to the contract test classes?)

Can we keep is-blobstore purely as a high-level flag in those *.xml files? The 
name itself matches the backing store of those file systems. 
We can still use the new flags to differentiate the Hadoop contract tests.

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13128) Manage Hadoop RPC resource usage via resource coupon

2016-10-03 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543586#comment-15543586
 ] 

Konstantin Shvachko commented on HADOOP-13128:
--

Will this also affect WebHDFS clients, or is it limited to RPCs only?
HTTP clients can be just as "aggressive" as RPC ones, in my experience.

> Manage Hadoop RPC resource usage via resource coupon
> 
>
> Key: HADOOP-13128
> URL: https://issues.apache.org/jira/browse/HADOOP-13128
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13128-Proposal-20160511.pdf
>
>
> HADOOP-9640 added RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
> ensure the fairness usage of the HDFS namenode resources. YARN, the Hadoop 
> cluster resource manager currently manages the CPU and Memory resources for 
> jobs/tasks but not the storage resources such as HDFS namenode and datanode 
> usage directly. As a result, a high-priority YARN job may send too 
> many RPC requests to the HDFS namenode and get demoted into low-priority call 
> queues due to the lack of reservation/coordination. 
> To better support multi-tenancy use cases like the above, we propose to manage 
> RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
> is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
> calls, datanode I/O bandwidth) from the namenode on behalf of the job at 
> submission time. Once granted, the tasks will include the coupon identifier 
> in the RPC header for subsequent calls. The HDFS namenode RPC scheduler maintains 
> the state of the coupon usage based on the scheduler policy (fairness or 
> priority) to match the RPC priority with the YARN scheduling priority.
> I will post a proposal with more detail shortly.
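Since only a design document exists so far, the coupon bookkeeping can only be sketched hypothetically; every name below is invented for illustration:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not from any patch: YARN grants a coupon at job
// submission, each RPC carries the coupon identifier in its header, and the
// scheduler maps the coupon back to a priority instead of demoting the
// caller purely on raw call volume.
final class CouponSchedulerSketch {
  private final Map<String, Integer> grantedPriority = new ConcurrentHashMap<>();

  void grant(String couponId, int priority) {            // at job submission
    grantedPriority.put(couponId, priority);
  }

  int priorityOf(String couponId, int defaultPriority) { // per RPC call
    return grantedPriority.getOrDefault(couponId, defaultPriority);
  }
}
{code}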



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543553#comment-15543553
 ] 

churro morales commented on HADOOP-13578:
-

[~jlowe] Great catch! I figured out the issue: I was returning the wrong buffer 
length in the decompressBytes function in ZStandardDecompressor.c. I made the 
change locally and everything works. I'll fix the warnings, make sure all your 
test cases pass, and incorporate your earlier comments as well. Thanks again 
for the review; I really appreciate you taking the time to take a look.

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-10-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543526#comment-15543526
 ] 

Daryn Sharp commented on HADOOP-13081:
--

So what are we doing with this JIRA?  I have not heard a compelling use case 
for adding/keeping a dangerous API when I believe the current API is sufficient 
and just misunderstood.  I want to make sure I understand the use case before 
rejecting the JIRA.

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from the ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling the ticket cache location - the 
> above method, which I only see called in tests, uses a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket-cache-related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with the first available one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-03 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543524#comment-15543524
 ] 

Suraj Acharya commented on HADOOP-13669:


I agree with the logic of it being a bit too much.
However, my thought process was that since most of these errors will be fatal 
(either killing the operation in progress or killing the startup of the KMS), I 
thought it was worth logging this as an error message.
But I'll take whatever recommendation you give me on this one.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch
>
>
> In some recent investigation, it turned out that when the KMS throws an 
> exception (into Tomcat), it is not logged anywhere, and we can only see the 
> exception message from the client side, but not the stacktrace. Logging the 
> stacktrace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-10-03 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543519#comment-15543519
 ] 

Manoj Govindassamy commented on HADOOP-13055:
-

Hi [~zhz],

After the support for "fs.viewfs.mounttable.linkMergeSlash", should an explicit 
slash mount "fs.viewfs.mounttable.link./" be treated like linkMergeSlash? I 
don't see the patch handling this case -- either supporting it or gracefully 
denying it. Can you please clarify? I would be happy to work on this if you are 
currently onto something else. Please let me know.

> Implement linkMergeSlash for ViewFs
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this case the root of the mount table is merged with the root of
>  *   hdfs://nn99/
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543511#comment-15543511
 ] 

Xiao Chen commented on HADOOP-13669:


Thanks for working on this [~sacharya]. I know this is still in the works, but 
I propose logging the exception stack trace only at debug level, to avoid 
spamming the server logs and causing unnecessary performance impact.
One way to do that is to log a higher-level message without the stack trace, 
and then log a DEBUG message with the stack trace. But since we can already see 
the error from the client side, maybe just DEBUG logging the message + stack 
trace is fine.
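
To illustrate, a minimal sketch of that two-level pattern, assuming an 
SLF4J-style logger; the class and messages are placeholders, not the actual 
KMS code:

{code}
// A minimal sketch of the two-level logging suggestion; names are
// placeholders, not the actual KMS code.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingSketch.class);

  static void doOperation() throws Exception {
    try {
      // ... the KMS operation would go here ...
      throw new IllegalStateException("example failure");
    } catch (Exception e) {
      // Higher-level message without the stack trace...
      LOG.warn("Operation failed: {}", e.toString());
      // ...and the full stack trace only at DEBUG.
      LOG.debug("Stack trace for operation failure", e);
      throw e;
    }
  }
}
{code}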

Also ping [~asuresh] for his input.

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch
>
>
> In some recent investigation, it turned out that when the KMS throws an 
> exception (into Tomcat), it is not logged anywhere and we can only see the 
> exception message from the client side, but not the stack trace. Logging the 
> stack trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-10-03 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-12611:
-
Attachment: HADOOP-12611.004.patch

bq. Does that make sense?
Yes. It does. However, it means that {{testMultipleInit}} only has 1 possible 
outcome, since there is only 1 {{rollSecret}} call. It makes me wonder what it 
gives us that {{testMultipleUnsynchronized}} doesn't already give us. 
{{testMultipleUnsynchronized}} has 3 secretProviders and rolls the secrets 
twice, while {{testMultipleInit}} has 2 secretProviders and rolls the secrets 
once. So it seems like {{testMultipleInit}} is just a subset of 
{{testMultipleUnsynchronized}}. [~rkanter], what do you think?

I'm uploading a patch that makes it so that the tests only test the ordering 
between {{rollSecret}} calls (meaning that {{testMultipleInit}} only has 1 
outcome, while {{testMultipleUnsynchronized}} has 2). If you agree that 
{{testMultipleInit}} is a redundant subset of {{testMultipleUnsynchronized}} 
then I can upload a patch removing it completely. And of course, if you think 
that there is functionality in {{testMultipleInit}} that I've stripped out, 
please do let me know. 
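
For what it's worth, a generic sketch of asserting ordering between 
{{rollSecret}} calls with Mockito's {{InOrder}} API; the interface below is a 
hypothetical stand-in, not the actual test code, which spies on 
{{ZKSignerSecretProvider}} instances:

{code}
// Generic order-verification sketch with Mockito; "SecretRoller" is a
// hypothetical stand-in for the spied secret providers.
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import org.mockito.InOrder;

public class OrderSketch {
  interface SecretRoller { void rollSecret(); }

  public static void main(String[] args) {
    SecretRoller first = mock(SecretRoller.class);
    SecretRoller second = mock(SecretRoller.class);
    first.rollSecret();
    second.rollSecret();
    InOrder order = inOrder(first, second);
    order.verify(first).rollSecret();   // passes only if first rolled first
    order.verify(second).rollSecret();
  }
}
{code}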

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch, HADOOP-12611.004.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoo

[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543434#comment-15543434
 ] 

Chris Nauroth commented on HADOOP-13502:


All whitespace warnings are in a file I didn't touch with this patch, so they 
aren't relevant.

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543423#comment-15543423
 ] 

Hadoop QA commented on HADOOP-13502:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-azure in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {col

[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543405#comment-15543405
 ] 

Jason Lowe commented on HADOOP-13578:
-

bq.  As far as the first issue were you using the hadoop-mapreduce-native-task 
code for intermediate compression for MR jobs?

No, I don't believe so, unless trunk uses that native code by default.  This 
was a straightforward MapReduce wordcount job with just the two settings I 
mentioned above: mapreduce.map.output.compress=true and 
mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec.
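
For clarity, a minimal sketch of setting those two properties through the 
Hadoop Configuration API; the class is illustrative only, not how the job was 
actually submitted:

{code}
// Hypothetical sketch of the two settings above via the Configuration API.
import org.apache.hadoop.conf.Configuration;

public class CompressConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.set("mapreduce.map.output.compress.codec",
        "org.apache.hadoop.io.compress.ZStandardCodec");
    System.out.println(conf.get("mapreduce.map.output.compress.codec"));
  }
}
{code}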


> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production at 
> Facebook for 6 months now.  v1.0 was recently released.  Create a codec for 
> this library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543380#comment-15543380
 ] 

Chris Nauroth commented on HADOOP-13674:


Here is the error I see when I attempt to access a bucket in us-west-2, but 
with {{fs.s3a.endpoint}} pointing to us-west-1.

{code}
> hadoop fs -D fs.s3a.endpoint=s3-us-west-1.amazonaws.com -ls 
> s3a://cnauroth-test-aws-s3a-logs/
ls: getFileStatus on : com.amazonaws.services.s3.model.AmazonS3Exception: The 
bucket you are attempting to access must be addressed using the specified 
endpoint. Please send all future requests to this endpoint. (Service: Amazon 
S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 
EC6C7FCF8B40B27C), S3 Extended Request ID: 
EQ1h4MW2CRLV4ZJBGs2xz2CVXwsfGS5X+ByWfyl1tdzbXbf7bFn5DI5pejcWWCmu1/P/uDEOjaU=: 
The bucket you are attempting to access must be addressed using the specified 
endpoint. Please send all future requests to this endpoint. (Service: Amazon 
S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 
EC6C7FCF8B40B27C)
{code}

It says that the endpoint is wrong, but it doesn't say which endpoint is 
correct.  Turning on debug logging shows that the information does come back in 
the HTTP 301 response:

{code}
> hadoop --loglevel DEBUG fs -D fs.s3a.endpoint=s3-us-west-1.amazonaws.com -ls 
> s3a://cnauroth-test-aws-s3a-logs/
...
16/10/03 13:35:28 DEBUG http.wire:  << 
"<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to 
access must be addressed using the specified endpoint. Please send all future 
requests to this 
endpoint.</Message><Bucket>cnauroth-test-aws-s3a-logs</Bucket><Endpoint>cnauroth-test-aws-s3a-logs.s3-us-west-2.amazonaws.com</Endpoint><RequestId>995927D9C5DD8F90</RequestId><HostId>LK/kvbR/gdnxyr5JXj1L41TOfcO4VBF6MtT8FkwOXXyRdjhasccrHc2bux+b4uHSqJmiBEgHJcI=</HostId></Error>"
...
{code}

It appears that the AWS SDK maps the {{Endpoint}} element into the map 
returned by 
[{{AmazonS3Exception#getAdditionalDetails()}}|http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/AmazonS3Exception.html#getAdditionalDetails--].
We can use that to get the information and put it into the exception thrown 
from S3A.
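
A rough sketch of that idea; the "Endpoint" key is an assumption based on the 
wire log above, and the method name is hypothetical, not the final patch:

{code}
// Rough sketch of extracting the redirect endpoint from the SDK exception;
// the "Endpoint" key is an assumption based on the wire log above.
import java.util.Map;
import com.amazonaws.services.s3.model.AmazonS3Exception;

public class EndpointHint {
  static String endpointHint(AmazonS3Exception e) {
    Map<String, String> details = e.getAdditionalDetails();
    if (details != null && details.containsKey("Endpoint")) {
      return "The bucket is served by a different endpoint: "
          + details.get("Endpoint");
    }
    return "";
  }
}
{code}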

> S3A can provide a more detailed error message when accessing a bucket through 
> an incorrect S3 endpoint.
> ---
>
> Key: HADOOP-13674
> URL: https://issues.apache.org/jira/browse/HADOOP-13674
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> When accessing the S3 service through a region-specific endpoint, the bucket 
> must be located in that region.  If the client attempts to access a bucket 
> that is not located in that region, then the service replies with a 301 
> redirect and the correct region endpoint.  However, the exception thrown by 
> S3A does not include the correct endpoint.  If we included that information 
> in the exception, it would make it easier for users to diagnose and fix 
> incorrect configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13674) S3A can provide a more detailed error message when accessing a bucket through an incorrect S3 endpoint.

2016-10-03 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13674:
--

 Summary: S3A can provide a more detailed error message when 
accessing a bucket through an incorrect S3 endpoint.
 Key: HADOOP-13674
 URL: https://issues.apache.org/jira/browse/HADOOP-13674
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


When accessing the S3 service through a region-specific endpoint, the bucket 
must be located in that region.  If the client attempts to access a bucket that 
is not located in that region, then the service replies with a 301 redirect and 
the correct region endpoint.  However, the exception thrown by S3A does not 
include the correct endpoint.  If we included that information in the 
exception, it would make it easier for users to diagnose and fix incorrect 
configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13626) Remove distcp dependency on FileStatus serialization

2016-10-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543344#comment-15543344
 ] 

Chris Douglas commented on HADOOP-13626:


[~cnauroth] could you take a look? I would like to commit this before 
HDFS-6984; you 
[cited|https://issues.apache.org/jira/browse/HDFS-6984?focusedCommentId=14123412&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14123412]
 this as a todo over there.

> Remove distcp dependency on FileStatus serialization
> 
>
> Key: HADOOP-13626
> URL: https://issues.apache.org/jira/browse/HADOOP-13626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HADOOP-13626.001.patch, HADOOP-13626.002.patch, 
> HADOOP-13626.003.patch
>
>
> DistCp uses an internal struct {{CopyListingFileStatus}} to record metadata. 
> Because this record extends {{FileStatus}}, it also relies on the 
> {{Writable}} contract from that type. Because DistCp performs its checks on a 
> subset of the fields (i.e., does not actually rely on {{FileStatus}} as a 
> supertype), these types should be independent.
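
For context, a minimal sketch of a standalone {{Writable}} record that does 
not extend {{FileStatus}}; the field names are illustrative only, not the 
actual {{CopyListingFileStatus}} layout:

{code}
// Minimal standalone Writable sketch; fields are illustrative only.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class CopyRecordSketch implements Writable {
  private Text path = new Text();
  private long length;

  @Override
  public void write(DataOutput out) throws IOException {
    path.write(out);
    out.writeLong(length);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    path.readFields(in);
    length = in.readLong();
  }
}
{code}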



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543332#comment-15543332
 ] 

churro morales commented on HADOOP-13578:
-

[~jlowe] Thanks for the update.  I will look into the issues.  As for the 
first issue, were you using the hadoop-mapreduce-native-task code for 
intermediate compression for MR jobs?  If so, I didn't implement that feature 
because I didn't know whether enough people were interested.  As for the 
warnings, I will clean them up first and will run a word count job to ensure 
that the decompression / compression works correctly.  I'll see if I can 
reproduce your word count results; that is a bit concerning.  Thanks for the 
review, I'll take a look at it today or tomorrow. 

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production at 
> Facebook for 6 months now.  v1.0 was recently released.  Create a codec for 
> this library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543301#comment-15543301
 ] 

Xiao Chen commented on HADOOP-13672:


+1 on latest patch. Will hold off the commit for 24 hours in case 
[~ste...@apache.org] or others have any comments.

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch, HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542
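
A minimal sketch of the kind of hook being requested; the method name and 
signature below are hypothetical, not the committed patch:

{code}
// Hypothetical sketch of an overrideable JSON hook; uses the Jackson 1.x
// ObjectMapper that the description refers to.
import java.io.IOException;
import java.io.Writer;
import java.util.Map;
import org.codehaus.jackson.map.ObjectMapper;

public class JsonHookSketch {
  protected void writeJsonResponse(Map<String, Object> map, Writer writer)
      throws IOException {
    // A subclass (e.g. in Solr) could override this to render the map
    // with another JSON library such as noggit.
    new ObjectMapper().writeValue(writer, map);
  }
}
{code}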



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543255#comment-15543255
 ] 

Hadoop QA commented on HADOOP-13672:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831371/HADOOP-13672.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ba79b213c4b1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 607705c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10644/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10644/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10644/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache

[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543176#comment-15543176
 ] 

Hadoop QA commented on HADOOP-13672:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 60 unchanged - 0 fixed = 62 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831330/HADOOP-13672.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df516de53696 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 607705c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10643/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10643/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10643/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10643/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extract out jackson cal

[jira] [Updated] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
Target Version/s: 2.9.0

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
Status: Patch Available  (was: Open)

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
Attachment: HADOOP-13502-branch-2.001.patch

I'm attaching patch 001.  Summary:

* Introduced 2 new specific contract options: create-overwrites-directory and 
create-visibility-delayed.
* Deprecated the is-blobstore flag, but retained it in case file system 
implementations outside the Hadoop source tree are using it.  (Side note: do we 
need to add audience and stability annotations to the contract test classes?)
* Cleaned up a few minor JavaDoc omissions in {{ContractOptions}}.
* Updated contract XML configuration files to remove usage of is-blobstore for 
the actively maintained file systems.  Notice that S3A adds 
create-visibility-delayed, but does not add create-overwrites-directory.  
That's because HADOOP-13188 recently changed the implementation to avoid 
overwriting directories.

So far, I have tested by running all subclasses of 
{{AbstractContractCreateTest}}, except {{TestSwiftContractCreate}}.  Maybe this 
will be the patch where I finally get myself set up to run Swift tests.
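
As a rough illustration only, here is how one of the new options might appear 
in a contract XML file; whether each option is set depends on the file system 
under test:

{code}
<!-- Hypothetical contract-option excerpt; actual per-filesystem values
     belong in the patch itself. -->
<property>
  <name>fs.contract.create-visibility-delayed</name>
  <value>true</value>
</property>
{code}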


> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated HADOOP-13672:
--
Attachment: HADOOP-13672.patch

Adding a simpler patch, based on [~noble.paul]'s suggestion [0].
[~xiaochen], please review. Thanks.

[0] - https://github.com/apache/hadoop/compare/branch-2.7...noblepaul:patch-2

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch, HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543049#comment-15543049
 ] 

Xiao Chen commented on HADOOP-13672:


+1 pending Jenkins. Looks to be a simple refactor, and I don't see any harm in 
doing this.
Targeting 2.7.x per Steve's comment.

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13672:
---
Target Version/s: 2.7.4

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13672:
---
Status: Patch Available  (was: Open)

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13672:
---
Assignee: Ishan Chattopadhyaya

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so 
> that in Solr we could override it to do the Map -> JSON conversion using 
> noggit, it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort

2016-10-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542988#comment-15542988
 ] 

Xiaoyu Yao commented on HADOOP-13234:
-

I've seen a recent instance of this Jenkins failure 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/16970/testReport/org.apache.hadoop.hdfs/TestDFSShell/testMoveWithTargetPortEmpty/],
 as shown below. [~brahmareddy], can you confirm whether the proposal here will 
fix the issue?

{code}
Tests run: 48, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 63.928 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell
testMoveWithTargetPortEmpty(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 
9.026 sec  <<< ERROR!
java.io.IOException: Port is already in use; giving up after 10 times.
at 
org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:809)

{code}

> Get random port by new ServerSocket(0).getLocalPort() in 
> ServerSocketUtil#getPort
> -
>
> Key: HADOOP-13234
> URL: https://issues.apache.org/jira/browse/HADOOP-13234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> As per [~iwasakims]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-10367?focusedCommentId=15275604&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15275604],
> we can get an available random port with {{new ServerSocket(0).getLocalPort()}},
> and it's more portable.
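
A minimal sketch of that approach; the class and method names are illustrative:

{code}
// Binding to port 0 makes the OS pick a free ephemeral port;
// try-with-resources closes the socket so the port can be reused.
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortSketch {
  public static int freePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
}
{code}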



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-10-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Description: 
As work continues on HADOOP-13397, it's become evident that we need better 
hooks to start daemons as specifically configured users.  Via the 
(command)_(subcommand)_USER environment variables in 3.x, we actually have a 
standardized way to do that.  This in turn means we can make the sbin scripts 
super functional with a bit of updating:

* Consolidate start-dfs.sh and start-secure-dns.sh into one script
* Make start-\*.sh and stop-\*.sh know how to switch users when run as root
* Undeprecate start/stop-all.sh so that it could be used as root for production 
purposes and as a single user for non-production users


  was:
As work continues on HADOOP-13397, it's become evident that we need better 
hooks to start daemons as specifically configured users.  Via the 
(command)_(subcommand)_USER environment variables in 3.x, we actually have a 
standardized way to do that.  This in turn means we can make the sbin scripts 
super functional with a bit of updating:

* Consolidate start-dfs.sh and start-secure-dns.sh into one script
* Make start-*.sh and stop-*.sh know how to switch users when run as root
* Undeprecate start/stop-all.sh so that it could be used as root for production 
purposes and as a single user for non-production users



> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-13502:
--

Assignee: Chris Nauroth

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-10-03 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13673:
-

 Summary: Update sbin/start-* and sbin/stop-* to be smarter
 Key: HADOOP-13673
 URL: https://issues.apache.org/jira/browse/HADOOP-13673
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


As work continues on HADOOP-13397, it's become evident that we need better 
hooks to start daemons as specifically configured users.  Via the 
(command)_(subcommand)_USER environment variables in 3.x, we actually have a 
standardized way to do that.  This in turn means we can make the sbin scripts 
super functional with a bit of updating:

* Consolidate start-dfs.sh and start-secure-dns.sh into one script
* Make start-*.sh and stop-*.sh know how to switch users when run as root
* Undeprecate start/stop-all.sh so that it could be used as root for production 
purposes and as a single user for non-production users
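
As a hypothetical illustration of the (command)_(subcommand)_USER convention 
described above, an excerpt of what a hadoop-env.sh might contain; the 
specific variable names and users are assumptions, not taken from the patch:

{code}
# Hypothetical hadoop-env.sh excerpt; variable names and users are
# illustrative assumptions.
export HDFS_NAMENODE_USER=hdfs
export HDFS_DATANODE_USER=root
export YARN_RESOURCEMANAGER_USER=yarn
{code}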




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13671) Fix ClassFormatException in trunk build.

2016-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13671:

Affects Version/s: 3.0.0-alpha2
  Component/s: build

> Fix ClassFormatException in trunk build.
> 
>
> Key: HADOOP-13671
> URL: https://issues.apache.org/jira/browse/HADOOP-13671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13671.patch
>
>
> The maven-project-info-reports-plugin version 2.7 depends on 
> maven-shared-jar-1.1, which uses bcel 5.2.  This does not work well with the 
> new lambda expressions.  Version 2.9 depends on maven-shared-jar-1.2, which 
> works around this problem by using the custom release of bcel 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.4

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542574#comment-15542574
 ] 

Hadoop QA commented on HADOOP-11086:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | {color:orange} root: The patch generated 34 new + 55 unchanged 
- 2 fixed = 89 total (was 57) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |

[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-10-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542571#comment-15542571
 ] 

Jason Lowe commented on HADOOP-13578:
-

Sorry for the delay in getting a more detailed review.  Before I delved deep 
into the code, I ran the codec through some basic tests and found a number of 
problems.

The native code compiles with warnings that should be cleaned up:
{noformat}
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:
 In function 
‘Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_decompressBytes’:
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:110:
 warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:110:
 warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:
 In function 
‘Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_decompressBytes’:
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:110:
 warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
[WARNING] 
/hadoop/y-src/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c:110:
 warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
{noformat}

The codec is not working as an intermediate codec for MapReduce jobs.  Running 
a wordcount job with -Dmapreduce.map.output.compress=true 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.GzipCodec 
works, but specifying -Dmapreduce.map.output.compress=true 
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec
 causes the reducers to fail while fetching map outputs, complaining about a 
premature EOF:
{noformat}
2016-10-03 13:51:32,140 INFO [fetcher#5] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#5 about to shuffle 
output of map attempt_1475501532481_0007_m_00_0 decomp: 323113 len: 93339 
to MEMORY
2016-10-03 13:51:32,149 WARN [fetcher#5] 
org.apache.hadoop.mapreduce.task.reduce.Fetcher: Failed to shuffle for fetcher#5
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
at 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.doShuffle(InMemoryMapOutput.java:90)
at 
org.apache.hadoop.mapreduce.task.reduce.IFileWrappedMapOutput.shuffle(IFileWrappedMapOutput.java:63)
at 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:536)
at 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
{noformat}

The codec also has some issues with MapReduce jobs when reading input from a 
previous job's output that has been zstd compressed.  For example, this 
sequence of steps generates output one would expect, where we're effectively 
word counting the output of wordcount on /etc/services (just some sample input 
for wordcount):
{noformat}
$ hadoop fs -put /etc/services wcin
$ hadoop jar 
$HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar wordcount 
-Dmapreduce.map.output.compress=true 
-Dmapreduce.output.fileoutputformat.compress=true 
-Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
 wcin wcout-gzip
$ hadoop jar 
$HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar wordcount 
wcout-gzip wcout-gzip2
{noformat}
But if we do the same with org.apache.hadoop.io.compress.ZStandardCodec there's 
an odd record consisting of about 25K of NULs (i.e.: 0x00 bytes) in the output 
of the second job.

The output of the ZStandardCodec is not readable by the zstd CLI utility, nor 
is output generated by the zstd CLI utility readable by ZStandardCodec.
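
For reference, the same intermediate-compression setup as a programmatic 
sketch; the codec class name is the one proposed in the patch, so treat it as 
tentative until the patch lands:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ZstdMapOutputJob {
  public static Job newJob() throws Exception {
    Configuration conf = new Configuration();
    // Compress intermediate map output with the proposed zstd codec.
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.set("mapreduce.map.output.compress.codec",
        "org.apache.hadoop.io.compress.ZStandardCodec");
    return Job.getInstance(conf, "wordcount-zstd");
  }
}
{code}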

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production at 
> Facebook for 6 months now.  v1.0 was recently released.  Create a codec for 
> this library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated HADOOP-13672:
--
Attachment: HADOOP-13672.patch

Adding a patch for the proposed extraction.
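
A sketch of one possible shape of the extraction (names here are illustrative; 
see the attached patch for the real change): the JSON rendering moves into a 
protected method that a subclass can override.

{code}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;

public class JsonRenderingHandler {
  // Overridable hook: the default uses Jackson 1.x as branch-2.7 does today,
  // while a subclass (e.g. in Solr) could render the map with noggit instead.
  protected void writeJsonResponse(Map map, Writer writer) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    mapper.writeValue(writer, map);
  }
}
{code}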

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract the calls to ObjectMapper into another method, so that 
> in Solr we could override it to do the Map -> JSON conversion using noggit, 
> it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13662) Upgrade jackson2 version

2016-10-03 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13662:
-
Comment: was deleted

(was: [~mackrorysd], please take a look at the parallel work in HADOOP-13332.)

> Upgrade jackson2 version
> 
>
> Key: HADOOP-13662
> URL: https://issues.apache.org/jira/browse/HADOOP-13662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13662.001.patch
>
>
> We're currently pulling in version 2.2.3 - I think we should upgrade to the 
> latest 2.8.3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13662) Upgrade jackson2 version

2016-10-03 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542401#comment-15542401
 ] 

Wei-Chiu Chuang commented on HADOOP-13662:
--

[~mackrorysd], please take a look at the parallel work in HADOOP-13332.

> Upgrade jackson2 version
> 
>
> Key: HADOOP-13662
> URL: https://issues.apache.org/jira/browse/HADOOP-13662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13662.001.patch
>
>
> We're currently pulling in version 2.2.3 - I think we should upgrade to the 
> latest 2.8.3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542357#comment-15542357
 ] 

Noble Paul commented on HADOOP-13672:
-

The point is, we do not want to use Jackson at all, because Solr uses another 
library. If the method is extracted out, it is an easy win without any effort 
on your side.

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Priority: Minor
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract the calls to ObjectMapper into another method, so that 
> in Solr we could override it to do the Map -> JSON conversion using noggit, 
> it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.4

2016-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11086:

Affects Version/s: (was: 2.6.0)
   2.7.3
   Status: Patch Available  (was: Open)

> Upgrade jets3t to 0.9.4
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HADOOP-11086-branch-2-003.patch, HADOOP-11086-v0.patch, 
> HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
> server-side encryption to fail.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor, 
> which requires removing the try/catch around that code)
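
On the constructor note above, a hedged before/after sketch of what the change 
means for callers (variable and method names are illustrative):

{code}
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

public class Jets3tInit {
  // jets3t 0.8.x declared S3ServiceException on this constructor, so callers
  // wrapped it in try/catch; in 0.9.x the throws clause is gone, and the now
  // unreachable catch block must be removed for the code to compile.
  static RestS3Service createService(String accessKey, String secretKey) {
    return new RestS3Service(new AWSCredentials(accessKey, secretKey));
  }
}
{code}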



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.4

2016-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11086:

Attachment: HADOOP-11086-branch-2-003.patch

Patch 003:

* moves to 0.9.4
* moves the jets3t dependency under hadoop-aws, which you need on the classpath 
for s3n/s3a anyway.
* Tested? S3 US-east.

This patch does *not* work; s3:// tests are failing with auth problems.


I'm not going to put any time into looking at this, because it's clearly more 
than a simple patch & test run, at least to understand the failures.  I've put 
in 1h already and am declaring defeat & unenthusiasm for debugging a problem 
which only appears to be surfacing on s3, but not s3n or s3a (hence: not clock 
problems).

{code}
testSeekZeroByteFile(org.apache.hadoop.fs.contract.s3.ITestS3ContractSeek)  
Time elapsed: 0.452 sec  <<< ERROR!
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: 
Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML 
Error Message: SignatureDoesNotMatchThe request 
signature we calculated does not match the signature you provided. Check your 
key and signing 
method.AKIAIYZ5JQOW3N5H6NPAGETMon,
 03 Oct 2016 12:29:34 
GMT/hwdev-steve-useast/AYzprUagWZ6w12dI9jzmWPucFKU=47
 45 54 0a 0a 0a 4d 6f 6e 2c 20 30 33 20 4f 63 74 20 32 30 31 36 20 31 32 3a 32 
39 3a 33 34 20 47 4d 54 0a 2f 68 77 64 65 76 2d 73 74 65 76 65 2d 6e 65 77 
2f075F76464E06D512bfBm6fT3i+3q8jThLDMXY877oIBjyyW6iJ4mDRj0gSE4wJYXOf2M2dgRSpwJpJRPiEhHsypF2ho=
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:169)
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:215)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy12.retrieveINode(Unknown Source)
at org.apache.hadoop.fs.s3.S3FileSystem.mkdir(S3FileSystem.java:202)
at org.apache.hadoop.fs.s3.S3FileSystem.mkdirs(S3FileSystem.java:189)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2005)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
at 
org.apache.hadoop.fs.contract.AbstractContractSeekTest.setup(AbstractContractSeekTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.jets3t.service.S3ServiceException: Service Error Message.
at org.jets3t.service.S3Service.getObject(S3Service.java:1470)
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:157)
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:215)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invo

[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.4

2016-10-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11086:

Summary: Upgrade jets3t to 0.9.4  (was: Upgrade jets3t to 0.9.2)

> Upgrade jets3t to 0.9.4
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HADOOP-11086-v0.patch, HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
> server-side encryption to fail.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor, 
> which requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-10-03 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542338#comment-15542338
 ] 

Panagiotis Garefalakis commented on HADOOP-12064:
-

[~ozawa] it's been a while and I did not face the issue again, so I agree it 
must indeed have been a Maven issue.

Cheers,
Panagiotis

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statement. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.
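
For anyone revisiting the original failure: a minimal sketch (class names 
illustrative) of the kind of module that could break under Guice 3.0 on JDK 8 
but works on 4.0, since Guice 3.0's bundled bytecode library predates Java 8 
class files:

{code}
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Provider;

public class LambdaModule extends AbstractModule {
  @Override
  protected void configure() {
    // The lambda compiles to invokedynamic, which Guice 3.0's bytecode
    // tooling cannot read; Guice 4.0 handles it.
    Provider<String> greeting = () -> "hello";
    bind(String.class).toProvider(greeting);
  }

  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new LambdaModule());
    System.out.println(injector.getInstance(String.class));
  }
}
{code}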



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2016-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542284#comment-15542284
 ] 

Steve Loughran commented on HADOOP-11086:
-

Tried setting the version to 0.9.4, it being later. Maven's enforcer is 
rejecting the build with a dependency convergence error (nice!), though it's 
between mockserver-netty and bouncycastle.

# I think we might want to have this patch move the jets3t JAR out of 
hadoop-common and under hadoop-aws
# Don't know what to do about versions here; I'd vote for the 1.52 version of 
bouncycastle in the shipping code. Are we shipping bouncycastle today?
{code}
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (depcheck) @ hadoop-hdfs-client 
---
[WARNING] 
Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.52 paths to 
dependency are:
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:2.9.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.4
  +-org.bouncycastle:bcprov-jdk15on:1.52
and
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.mock-server:mockserver-netty:3.9.2
+-org.mock-server:mockserver-core:3.9.2
  +-org.bouncycastle:bcprov-jdk15on:1.51
and
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.mock-server:mockserver-netty:3.9.2
+-org.bouncycastle:bcprov-jdk15on:1.51

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.52 paths to 
dependency are:
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:2.9.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.4
  +-org.bouncycastle:bcprov-jdk15on:1.52
and
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.mock-server:mockserver-netty:3.9.2
+-org.mock-server:mockserver-core:3.9.2
  +-org.bouncycastle:bcprov-jdk15on:1.51
and
+-org.apache.hadoop:hadoop-hdfs-client:2.9.0-SNAPSHOT
  +-org.mock-server:mockserver-netty:3.9.2
+-org.bouncycastle:bcprov-jdk15on:1.51
]
{code}

> Upgrade jets3t to 0.9.2
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HADOOP-11086-v0.patch, HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
> server-side encryption to fail.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor, 
> which requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542248#comment-15542248
 ] 

Steve Loughran commented on HADOOP-13672:
-

You might want to look at HADOOP-13332 as a fix for this, but an extracted 
method here is something we'd be able to safely backport to 2.7.x, so it is of 
more immediate benefit.

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Priority: Minor
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract the calls to ObjectMapper into another method, so that 
> in Solr we could override it to do the Map -> JSON conversion using noggit, 
> it would be helpful.
> Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-10-03 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542213#comment-15542213
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-12064 at 10/3/16 11:32 AM:
---

I cannot reproduce the problem. I think the above error reported by [~pgaref] 
can be fixed by removing the ~/.m2 directory.


was (Author: ozawa):
I think the above error reported by [~pgaref] can be fixed by removing the 
~/.m2 directory.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statement. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-10-03 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542213#comment-15542213
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

I think the above error reported by [~pgaref] can be fixed by removing the 
~/.m2 directory.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statement. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13670) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15541793#comment-15541793
 ] 

Hadoop QA commented on HADOOP-13670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 628 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
14s{color} | {color:red} The patch has 32 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c420dfe |
| JIRA Issue | HADOOP-13670 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831280/HADOOP-13670-branch-2.7-02.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux d67d0a6e9c8e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.7 / c08346e |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10641/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10641/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project hadoop-mapreduce-project U: 
. |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10641/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13670
> URL: https://issues.apache.org/jira/browse/HADOOP-13670
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HADOOP-13670-branch-2.7-02.patch, 
> HADOOP-13670-branch-2.7.patch, HADOOP-13670.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some recent commits to branch-2.7 that did not update CHANGES.txt. We 
> need to update the change log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13670) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-10-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13670:
--
Attachment: HADOOP-13670-branch-2.7-02.patch

Uploaded a patch to address the above comments.

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13670
> URL: https://issues.apache.org/jira/browse/HADOOP-13670
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HADOOP-13670-branch-2.7-02.patch, 
> HADOOP-13670-branch-2.7.patch, HADOOP-13670.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some recent commits to branch-2.7 that did not update CHANGES.txt. We 
> need to update the change log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-03 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created HADOOP-13672:
-

 Summary: Extract out jackson calls into an overrideable method in 
DelegationTokenAuthenticationHandler
 Key: HADOOP-13672
 URL: https://issues.apache.org/jira/browse/HADOOP-13672
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ishan Chattopadhyaya
Priority: Minor


In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
the following lines, we need to import Jackson (old version).

https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279

If we could extract the calls to ObjectMapper into another method, so that in 
Solr we could override it to do the Map -> JSON conversion using noggit, it 
would be helpful.

Reference: SOLR-9542



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org