[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341164#comment-15341164
 ] 

Vinayakumar B commented on HADOOP-12893:


bq. FYI: Now we can build trunk, branch-2, branch-2.8, and branch-2.7 without 
any fix because the hadoop-build-tools jar and pom are uploaded to the snapshot 
repository.
I pushed the branch-2.7 artifacts on 17th June to unblock the 2.7 QA run. 

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341077#comment-15341077
 ] 

Xiao Chen commented on HADOOP-12893:


Thanks [~arpitagarwal] for looking into this.

Per the [ASF 
requirement|http://www.apache.org/dev/licensing-howto.html#mod-notice], the NOTICE 
file is intended for explicit notices required by the dependencies, not for listing 
copyright info. (I had a similar misconception when working on the spreadsheet.) 
So for jcip, we're correct to leave its NOTICE contribution empty.

If you see anything else missing, please feel free to comment.
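
For illustration only (the wording and placement below are hypothetical, not taken 
from the attached patches): an attribution like the one in the jcip README belongs 
in LICENSE.txt for the bundled artifact, while NOTICE.txt stays untouched, e.g.:
{noformat}
This product bundles jcip-annotations, Copyright (c) 2005 Brian Goetz and
Tim Peierls, released under the Creative Commons Attribution License 2.5
(http://creativecommons.org/licenses/by/2.5).
{noformat}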

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341047#comment-15341047
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

OK, I will check whether upgrading to 4.1 is acceptable.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340935#comment-15340935
 ] 

Arpit Agarwal commented on HADOOP-12893:


Hi all, thanks for taking on this monumental task. 

Do you think we need to include a notice for the jcip-annotation bundled jars 
per their [README|https://github.com/chrisoei/JCIP/blob/master/README]?
{code}
Copyright (c) 2005 Brian Goetz and Tim Peierls
Released under the Creative Commons Attribution License
(http://creativecommons.org/licenses/by/2.5)
Official home: http://www.jcip.net
{code}

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340871#comment-15340871
 ] 

Akira AJISAKA commented on HADOOP-9613:
---

bq. Can we do this on another jira? Your comment is just a minor refactoring, 
so we can do it on a separate issue.
Filed HADOOP-13302.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Created] (HADOOP-13302) Remove unused variable in TestRMWebServicesForCSWithPartitions#setupQueueConfiguration

2016-06-20 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13302:
--

 Summary: Remove unused variable in 
TestRMWebServicesForCSWithPartitions#setupQueueConfiguration
 Key: HADOOP-13302
 URL: https://issues.apache.org/jira/browse/HADOOP-13302
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Akira AJISAKA
Priority: Minor


{code}
  private static void setupQueueConfiguration(
  CapacitySchedulerConfiguration config, ResourceManager resourceManager) {
{code}
{{resourceManager}} is not used, so it can be removed.
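
A minimal sketch of the proposed cleanup (signature only, mirroring the snippet 
above; the method body is elided and unchanged):
{code}
  // Hypothetical post-cleanup signature: the unused ResourceManager
  // parameter is dropped and only the configuration is passed in.
  private static void setupQueueConfiguration(
      CapacitySchedulerConfiguration config) {
    // ... existing queue setup logic, unchanged ...
  }
{code}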






[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340859#comment-15340859
 ] 

Akira AJISAKA commented on HADOOP-12064:


Guice 4.1 has been released, so can we move to 4.1 instead of 4.0?

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.






[jira] [Commented] (HADOOP-12949) Add HTrace to the s3a connector

2016-06-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340829#comment-15340829
 ] 

Colin Patrick McCabe commented on HADOOP-12949:
---

Yeah, we certainly could use the UA header for this.  That assumes that 
Amazon's s3 implementation will start looking for it (which maybe they 
will?).  In the short term, the big win will be simply connecting the job 
being run with the operations being performed at the s3a level.
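
As a rough illustration of the UA idea (a hypothetical sketch, not the s3a code; 
{{jobId}} is an assumed caller-supplied value, and the real wiring inside 
S3AFileSystem would differ):
{code}
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;

public class UserAgentTaggingSketch {
  // Tag outgoing S3 requests with a job identifier so that server-side access
  // logs could correlate requests with the MR job being run.
  public static AmazonS3Client newClient(AWSCredentialsProvider creds, String jobId) {
    ClientConfiguration conf = new ClientConfiguration();
    conf.setUserAgent("hadoop-s3a job=" + jobId); // AWS SDK 1.x setter
    return new AmazonS3Client(creds, conf);
  }
}
{code}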

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>Assignee: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature. Please shed some light on this.
> Thanks,
> Madhawa






[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340818#comment-15340818
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

Kicking Jenkins with the 002 patch. I'll also test the patch on my local machine.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.






[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340804#comment-15340804
 ] 

Hadoop QA commented on HADOOP-13251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-common-project: The patch generated 9 new + 132 
unchanged - 0 fixed = 141 total (was 132) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  7s{color} 
| {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812049/HADOOP-13251.04.patch 
|
| JIRA Issue | HADOOP-13251 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2395010cf9fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b7c4cf7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9840/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9840/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9840/testReport/ |
| modules | C: 

[jira] [Updated] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13251:
---
Attachment: HADOOP-13251.04.patch

Further to an offline talk with ATM, I learned that, due to the security 
sensitivity of delegation tokens, DT ops should require more secure 
authentication (i.e. they must not be allowed using DT auth).
So I think we should:
- Revert HADOOP-13228, which was based on my wrong understanding.
- Continue with the right fix here. Attached patch 4 (the unit test passes after 
reverting HADOOP-13228).
- File a new jira to fix the existing add/renew behavior to disallow using a DT.

[~atm] and [~andrew.wang],
Could you please take a look and share your thoughts? Thanks a lot.
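
For context, a minimal sketch of the intended contract (a hypothetical 
illustration, not the actual DelegationTokenAuthenticationHandler code): the 
renewer passed to the token manager should be the authenticated caller of the 
renew operation, not the token's owner.
{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class RenewerSketch {
  private RenewerSketch() {}

  // The effective renewer is whoever is authenticated for this renew call,
  // e.g. "yarn" when the RM renews on behalf of the submitting user.
  public static String effectiveRenewer() throws IOException {
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}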

> DelegationTokenAuthenticationHandler should detect actual renewer when renew 
> token
> --
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.innocent.patch
>
>
> Turns out the KMS delegation token renewal feature (HADOOP-13155) does not work 
> well with client-side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> is used on the KMS server side to decide the renewer, which in this case is always 
> the token's owner. This ends up rejecting the renew request due to a renewer 
> mismatch.






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340727#comment-15340727
 ] 

Hudson commented on HADOOP-9613:


SUCCESS: Integrated in Hadoop-trunk-Commit #9990 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9990/])
HADOOP-9613. [JDK8] Update jersey version to latest 1.x release. (ozawa: rev 
5d58858bb6dfc07272ef099d60ca7cfb3b04423c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesTasks.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempt.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobConf.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobConf.java
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineWriter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesReservation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/GuiceServletConfig.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithCSRFFilter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesTasks.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java
* hadoop-project/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
* 

[jira] [Commented] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340724#comment-15340724
 ] 

Hadoop QA commented on HADOOP-13301:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812041/HADOOP-13301.001.patch
 |
| JIRA Issue | HADOOP-13301 |
| Optional Tests |  asflicense  mvnsite  unit  |
| uname | Linux 1cdce2a93655 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b7c4cf7 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9839/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9839/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13301.001.patch
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}






[jira] [Commented] (HADOOP-13291) Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be correctly implemented

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340726#comment-15340726
 ] 

Hudson commented on HADOOP-13291:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9990 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9990/])
HADOOP-13291. Probing stats in (jitendra: rev 
b7c4cf7129768c0312b186dfb94ba1beb891e2f3)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java


> Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be 
> correctly implemented
> ---
>
> Key: HADOOP-13291
> URL: https://issues.apache.org/jira/browse/HADOOP-13291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13291.000.patch, HADOOP-13291.001.patch, 
> HADOOP-13291.002.patch
>
>
> To probe a stat in {{StorageStatistics}}, users can use the 
> {{StorageStatistics#isTracked()}} API. Currently {{DFSOpsCountStatistics}} 
> implements this method incorrectly, and {{S3AStorageStatistics}} borrowed the 
> same idea and has the same error.
> # {{isTracked()}} is not correctly implemented. I believe this was an 
> omission in the code.
> # {{isTracked()}} checks a stat by its operation symbol (instead of the enum 
> name), so {{getLongStatistics()}} should return LongStatistic iterators with the 
> symbol as the name, instead of the enum variable name. Otherwise, 
> {{isTracked(getLongStatistics().next().getName())}} returns false, which will 
> lead to confusion.
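
For context, a minimal hypothetical sketch of the contract described above (not 
the actual DFSOpsCountStatistics code): if the counters are keyed by the public 
operation symbol, {{isTracked()}} agrees with the names reported by the 
statistics iterator.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SymbolKeyedStatisticsSketch {
  // Counters keyed by the operation symbol, i.e. the same string that the
  // statistics iterator reports as each entry's name.
  private final Map<String, AtomicLong> opsCount = new ConcurrentHashMap<>();

  public boolean isTracked(String key) {
    // True exactly for the names the iterator would return.
    return opsCount.containsKey(key);
  }
}
{code}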






[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed this to trunk.

Thanks [~ajisakaa] and [~ste...@apache.org] for the iterative reviews, and thanks 
[~leftnoteasy], [~gtCarrera9], [~sunilg], [~sjlee0] for the feedback.

Created YARN-5275 for tracking the problem.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Updated] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13301:

Status: Patch Available  (was: In Progress)

> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13301.001.patch
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}






[jira] [Updated] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13301:

Attachment: HADOOP-13301.001.patch

Patch 001:
* Change the console appender's log4j ConversionPattern 
(log4j.appender.console.layout.ConversionPattern) to ISO8601, which prints 
milliseconds (see the illustrative snippet below).
* Change the JSA appender as well so that everything uses ISO8601.
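
For illustration, a console pattern along these lines prints millisecond 
timestamps (property names as in a typical log4j.properties; the exact lines 
touched by the patch may differ):
{code}
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# ISO8601 renders e.g. 2016-06-20 16:01:42,588 (milliseconds included)
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}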

> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13301.001.patch
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}






[jira] [Commented] (HADOOP-13291) Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be correctly implemented

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340687#comment-15340687
 ] 

Mingliang Liu commented on HADOOP-13291:


Thanks [~jnp] for the commit, and [~ste...@apache.org] for the review and 
discussion.

> Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be 
> correctly implemented
> ---
>
> Key: HADOOP-13291
> URL: https://issues.apache.org/jira/browse/HADOOP-13291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13291.000.patch, HADOOP-13291.001.patch, 
> HADOOP-13291.002.patch
>
>
> To probe a stat in {{StorageStatistics}}, users can use the 
> {{StorageStatistics#isTracked()}} API. Currently {{DFSOpsCountStatistics}} 
> implements this method incorrectly, and {{S3AStorageStatistics}} borrowed the 
> same idea and has the same error.
> # {{isTracked()}} is not correctly implemented. I believe this was an 
> omission in the code.
> # {{isTracked()}} checks a stat by its operation symbol (instead of the enum 
> name), so {{getLongStatistics()}} should return LongStatistic iterators with the 
> symbol as the name, instead of the enum variable name. Otherwise, 
> {{isTracked(getLongStatistics().next().getName())}} returns false, which will 
> lead to confusion.






[jira] [Commented] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340682#comment-15340682
 ] 

Hadoop QA commented on HADOOP-13299:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812025/hadoop13299.001.patch 
|
| JIRA Issue | HADOOP-13299 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 250499de3aa4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c1f81d |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9838/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9838/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> JMXJsonServlet is vulnerable to TRACE 
> --
>
> Key: HADOOP-13299
> URL: https://issues.apache.org/jira/browse/HADOOP-13299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: hadoop13299.001.patch
>
>
> A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests.  
> We could disable this to avoid the vulnerability.





[jira] [Updated] (HADOOP-13291) Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be correctly implemented

2016-06-20 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-13291:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8. Thanks Mingliang for 
the patch, and Steve for testing it with S3.

> Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be 
> correctly implemented
> ---
>
> Key: HADOOP-13291
> URL: https://issues.apache.org/jira/browse/HADOOP-13291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13291.000.patch, HADOOP-13291.001.patch, 
> HADOOP-13291.002.patch
>
>
> To probe a stat in {{StorageStatistics}}, users can use the 
> {{StorageStatistics#isTracked()}} API. Currently {{DFSOpsCountStatistics}} 
> implements this method incorrectly, and {{S3AStorageStatistics}} borrowed the 
> same idea and has the same error.
> # {{isTracked()}} is not correctly implemented. I believe this was an 
> omission in the code.
> # {{isTracked()}} checks a stat by its operation symbol (instead of the enum 
> name), so {{getLongStatistics()}} should return LongStatistic iterators with the 
> symbol as the name, instead of the enum variable name. Otherwise, 
> {{isTracked(getLongStatistics().next().getName())}} returns false, which will 
> lead to confusion.






[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Fix Version/s: 3.0.0-alpha1
 Release Note: 
Upgrading Jersey and its related libraries: 

1. Upgrading jersey from 1.9 to 1.19
2. Adding jersey-servlet 1.19
3. Upgrading grizzly-http-servlet from 2.1.2 to 2.2.21
4. Adding grizzly-http 2.2.21
5. Adding grizzly-http-server 2.2.21

After upgrading Jersey from 1.12 to 1.13, a root element whose content is an 
empty collection is changed from null to an empty object ({}). 
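
For illustration (the element name here is hypothetical, not tied to a specific 
endpoint), a response whose collection is empty changes roughly as follows:
{noformat}
Jersey 1.12 and earlier:  {"apps":null}
Jersey 1.13 and later:    {"apps":{}}
{noformat}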


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Work started] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13301 started by John Zhuge.
---
> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}






[jira] [Created] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13301:
---

 Summary: Millisecond timestamp for FsShell console log
 Key: HADOOP-13301
 URL: https://issues.apache.org/jira/browse/HADOOP-13301
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial
 Fix For: 3.0.0-alpha1


The log message timestamp on the FsShell console shows only seconds. 
{noformat}
$ export HADOOP_ROOT_LOGGER=TRACE,console
$ hdfs dfs -rm -skipTrash /tmp/2G*
16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
{noformat}

Would like to see milliseconds for quick performance tuning.
{noformat}
2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
{noformat}






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340618#comment-15340618
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Sure.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Updated] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-13299:

Status: Patch Available  (was: Open)

> JMXJsonServlet is vulnerable to TRACE 
> --
>
> Key: HADOOP-13299
> URL: https://issues.apache.org/jira/browse/HADOOP-13299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: hadoop13299.001.patch
>
>
> A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests.  
> We could disable this to avoid the vulnerability.






[jira] [Updated] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-13299:

Attachment: hadoop13299.001.patch

The patch overrides the doTrace method in JMXJsonServlet to disable TRACE 
requests.
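
A minimal sketch of the general technique (assuming only the standard servlet 
API; not necessarily the exact attached patch): override {{HttpServlet#doTrace}} 
and answer with 405.
{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TraceDisabledServletSketch extends HttpServlet {
  @Override
  protected void doTrace(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    // Reject HTTP TRACE outright so the request cannot be reflected back.
    resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED, "TRACE is disabled");
  }
}
{code}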

> JMXJsonServlet is vulnerable to TRACE 
> --
>
> Key: HADOOP-13299
> URL: https://issues.apache.org/jira/browse/HADOOP-13299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: hadoop13299.001.patch
>
>
> A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests.  
> We could disable this to avoid the vulnerability.






[jira] [Commented] (HADOOP-13300) maven-jar-plugin executions break build with newer plugin

2016-06-20 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340570#comment-15340570
 ] 

Christopher Tubbs commented on HADOOP-13300:


In some of the cases, it looks like the main jar artifact is being created 
earlier in the build lifecycle so that it is available for tests. In these cases, 
it might be acceptable to override (or skip) the default execution of the 
maven-jar-plugin, but it is probably better to omit the custom execution and 
instead move the tests to the integration-test phase, which exists for this 
purpose.
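
For illustration, pinning such an extra execution to the {{default-jar}} id 
would look roughly like this (the phase and layout are placeholders, not copied 
from the Hadoop poms):
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <!-- Reusing the default execution id overrides the default jar goal
         instead of adding a second, conflicting execution. -->
    <execution>
      <id>default-jar</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}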

> maven-jar-plugin executions break build with newer plugin
> -
>
> Key: HADOOP-13300
> URL: https://issues.apache.org/jira/browse/HADOOP-13300
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Christopher Tubbs
>
> Several places throughout the Hadoop build (going back at least as far as 
> 2.4.1; I didn't check earlier), extra executions of maven-jar-plugin have 
> been used to create jars at different phases in the build lifecycle.
> These have typically not specified an execution id, but should have specified 
> "default-jar" to override the default execution of maven-jar-plugin. They 
> have worked in the past because maven-jar-plugin didn't check to verify if an 
> artifact was built/attached multiple times (without using a classifier), but 
> will not work when a newer version of maven-jar-plugin is used (>3.0), which 
> is more strict about checking.
> This is a problem for any downstream packagers which are using newer versions 
> of build plugins (due to dependency convergence) and will become a problem 
> when Hadoop moves to a newer version of the jar plugin (with ASF Parent POM 
> 18, for example).
> [These are the ones I've 
> found|https://lists.apache.org/thread.html/2c9d9ea5448a3ed22743916d20e40a9e589bfa383c8ea65f35cb3f0d@%3Cuser.hadoop.apache.org%3E].






[jira] [Created] (HADOOP-13300) maven-jar-plugin executions break build with newer plugin

2016-06-20 Thread Christopher Tubbs (JIRA)
Christopher Tubbs created HADOOP-13300:
--

 Summary: maven-jar-plugin executions break build with newer plugin
 Key: HADOOP-13300
 URL: https://issues.apache.org/jira/browse/HADOOP-13300
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Christopher Tubbs


Several places throughout the Hadoop build (going back at least as far as 
2.4.1; I didn't check earlier), extra executions of maven-jar-plugin have been 
used to create jars at different phases in the build lifecycle.

These have typically not specified an execution id, but should have specified 
"default-jar" to override the default execution of maven-jar-plugin. They have 
worked in the past because maven-jar-plugin didn't check to verify if an 
artifact was built/attached multiple times (without using a classifier), but 
will not work when a newer version of maven-jar-plugin is used (>3.0), which is 
more strict about checking.

This is a problem for any downstream packagers which are using newer versions 
of build plugins (due to dependency convergence) and will become a problem when 
Hadoop moves to a newer version of the jar plugin (with ASF Parent POM 18, for 
example).

[These are the ones I've 
found|https://lists.apache.org/thread.html/2c9d9ea5448a3ed22743916d20e40a9e589bfa383c8ea65f35cb3f0d@%3Cuser.hadoop.apache.org%3E].






[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340554#comment-15340554
 ] 

Hadoop QA commented on HADOOP-13263:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common-project/hadoop-common: The patch 
generated 28 new + 216 unchanged - 0 fixed = 244 total (was 216) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m  4s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.security.Groups$GroupCacheLoader.executorService; locked 66% 
of time  Unsynchronized access at Groups.java:66% of time  Unsynchronized 
access at Groups.java:[line 340] |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811990/HADOOP-13263.004.patch
 |
| JIRA Issue | HADOOP-13263 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09bbfa1ed99e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c1f81d |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9837/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9837/artifact/patchprocess/whitespace-eol.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9837/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 

[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9613:
--
Target Version/s: 3.0.0-alpha1  (was: )
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

Would you write a release note?

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340546#comment-15340546
 ] 

Akira AJISAKA commented on HADOOP-9613:
---

bq. Can we do this on another jira?
Agreed. +1 for the current patch. The test failures are not related.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-20 Thread Haibo Chen (JIRA)
Haibo Chen created HADOOP-13299:
---

 Summary: JMXJsonServlet is vulnerable to TRACE 
 Key: HADOOP-13299
 URL: https://issues.apache.org/jira/browse/HADOOP-13299
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haibo Chen
Assignee: Haibo Chen
Priority: Minor


A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests. 
We could disable these methods to avoid this vulnerability.
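
For illustration, a minimal sketch of one way to reject TRACE at the servlet
level, using only the standard javax.servlet API; the class name is
hypothetical and this is not necessarily how JMXJsonServlet will be patched:

{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: answer TRACE with 405 so the request is never echoed back.
public class NoTraceServlet extends HttpServlet {
  @Override
  protected void doTrace(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED, "TRACE is disabled");
  }
}
{code}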



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340524#comment-15340524
 ] 

ASF GitHub Bot commented on HADOOP-9613:


Github user oza commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/76#discussion_r67776639
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
 ---
@@ -106,76 +105,79 @@ protected void configureServlets() {
   bind(ResourceManager.class).toInstance(rm);
   serve("/*").with(GuiceContainer.class);
 }
-  });
+  };
 
-  public class GuiceServletConfig extends GuiceServletContextListener {
-@Override
-protected Injector getInjector() {
-  return injector;
-}
+  static {
+GuiceServletConfig.setInjector(
+Guice.createInjector(new WebServletModule()));
   }
 
   private static void setupQueueConfiguration(
-  CapacitySchedulerConfiguration conf, ResourceManager rm) {
+  CapacitySchedulerConfiguration config, ResourceManager 
resourceManager) {
--- End diff --

@aajisaka thank you for the comment. The fix I made is just for avoiding a 
warning.
Can we do this on another jira? Your comment is just a minor refactoring, 
so we can do it in a separate issue.


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340491#comment-15340491
 ] 

Hudson commented on HADOOP-13288:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9989 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9989/])
HADOOP-13288. Guard null stats key in FileSystemStorageStatistics (cmccabe: rev 
8c1f81d4bf424bdc421cf4952b230344e39a7b68)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java


> Guard null stats key in FileSystemStorageStatistics
> ---
>
> Key: HADOOP-13288
> URL: https://issues.apache.org/jira/browse/HADOOP-13288
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13288.000.patch, HADOOP-13288.001.patch
>
>
> Currently in {{FileSystemStorageStatistics}} we simply return data from 
> {{FileSystem#Statistics}}. However, there is no null key check, which leads to 
> NPE problems for downstream applications. For example, we got an NPE when 
> passing a null key to {{FileSystemStorageStatistics#getLong()}}; the exception 
> stack is as follows:
> {quote}
> NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> This jira is to add a null stat key check to {{FileSystemStorageStatistics}}.
> Thanks [~hitesh] for trying in Tez and reporting this.
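
For illustration, a minimal sketch of the kind of guard being added; the class
and field names below are illustrative, not the committed HADOOP-13288 code:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a getLong(String)-style lookup that tolerates a null key
// instead of throwing NullPointerException.
public class GuardedStatistics {
  private final Map<String, Long> stats = new HashMap<>();

  public Long getLong(String key) {
    if (key == null) {
      // Treat a null key like an unknown key rather than blowing up the caller.
      return null;
    }
    return stats.get(key);
  }
}
{code}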



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340470#comment-15340470
 ] 

Mingliang Liu commented on HADOOP-13288:


Thank you [~cmccabe] for the review and commit!

> Guard null stats key in FileSystemStorageStatistics
> ---
>
> Key: HADOOP-13288
> URL: https://issues.apache.org/jira/browse/HADOOP-13288
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13288.000.patch, HADOOP-13288.001.patch
>
>
> Currently in {{FileSystemStorageStatistics}} we simply return data from 
> {{FileSystem#Statistics}}. However, there is no null key check, which leads to 
> NPE problems for downstream applications. For example, we got an NPE when 
> passing a null key to {{FileSystemStorageStatistics#getLong()}}; the exception 
> stack is as follows:
> {quote}
> NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> This jira is to add a null stat key check to {{FileSystemStorageStatistics}}.
> Thanks [~hitesh] for trying in Tez and reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics

2016-06-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13288:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> Guard null stats key in FileSystemStorageStatistics
> ---
>
> Key: HADOOP-13288
> URL: https://issues.apache.org/jira/browse/HADOOP-13288
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13288.000.patch, HADOOP-13288.001.patch
>
>
> Currently in {{FileSystemStorageStatistics}} we simply return data from 
> {{FileSystem#Statistics}}. However, there is no null key check, which leads to 
> NPE problems for downstream applications. For example, we got an NPE when 
> passing a null key to {{FileSystemStorageStatistics#getLong()}}; the exception 
> stack is as follows:
> {quote}
> NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> This jira is to add a null stat key check to {{FileSystemStorageStatistics}}.
> Thanks [~hitesh] for trying in Tez and reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13280:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As for {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.
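
For illustration, a minimal sketch of the intended behaviour, with illustrative
names rather than the actual patch: "readOps" reports the combined count,
mirroring {{FileSystem$Statistics#getReadOps()}}:

{code}
// Illustrative only: expose readOps as the sum of plain and large read operations.
class ReadOpsView {
  private long readOps;
  private long largeReadOps;

  long getLong(String key) {
    if ("readOps".equals(key)) {
      return readOps + largeReadOps;   // combined count, as getReadOps() does
    }
    if ("largeReadOps".equals(key)) {
      return largeReadOps;
    }
    throw new IllegalArgumentException("unknown key: " + key);
  }
}
{code}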



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics

2016-06-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340428#comment-15340428
 ] 

Colin Patrick McCabe commented on HADOOP-13288:
---

+1.  Thanks, [~liuml07].

> Guard null stats key in FileSystemStorageStatistics
> ---
>
> Key: HADOOP-13288
> URL: https://issues.apache.org/jira/browse/HADOOP-13288
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13288.000.patch, HADOOP-13288.001.patch
>
>
> Currently in {{FileSystemStorageStatistics}} we simply return data from 
> {{FileSystem#Statistics}}. However, there is no null key check, which leads to 
> NPE problems for downstream applications. For example, we got an NPE when 
> passing a null key to {{FileSystemStorageStatistics#getLong()}}; the exception 
> stack is as follows:
> {quote}
> NullPointerException
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80)
> at 
> org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108)
> at 
> org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60)
> at 
> org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118)
> at 
> org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> This jira is to add a null stat key check to {{FileSystemStorageStatistics}}.
> Thanks [~hitesh] for trying in Tez and reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-20 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-13263:
---
Attachment: HADOOP-13263.004.patch

Updated patch to fix findbugs and checkstyle issues.

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
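
For illustration, a self-contained sketch of the Guava pattern described above
({{refreshAfterWrite}} plus {{CacheLoader.asyncReloading}}); the pool size,
timeout, and lookup method are illustrative, not Hadoop's actual defaults or
code:

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundGroupCache {
  // Small pool that performs refreshes so callers never block on expiry.
  private final ExecutorService reloadPool = Executors.newFixedThreadPool(1);

  private final LoadingCache<String, List<String>> groups =
      CacheBuilder.newBuilder()
          .refreshAfterWrite(5, TimeUnit.MINUTES)      // stale entries trigger a refresh
          .build(CacheLoader.asyncReloading(
              new CacheLoader<String, List<String>>() {
                @Override
                public List<String> load(String user) {
                  return lookupGroups(user);           // potentially slow lookup
                }
              },
              reloadPool));     // refresh runs here; callers keep getting the old value

  public List<String> getGroups(String user) throws Exception {
    return groups.get(user);
  }

  private List<String> lookupGroups(String user) {
    // Placeholder for the real (slow) group resolution.
    return Collections.singletonList(user);
  }
}
{code}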



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340343#comment-15340343
 ] 

Xiao Chen commented on HADOOP-13297:


Sure Sean, I created HADOOP-13298 and assigned it to you. Thank you.

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Sean Busbey
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-06-20 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13298:
--

 Summary: Fix the leftover L files in 
hadoop-build-tools/src/main/resources/META-INF/
 Key: HADOOP-13298
 URL: https://issues.apache.org/jira/browse/HADOOP-13298
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
Reporter: Xiao Chen
Assignee: Sean Busbey


After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
{{hadoop-build-tools/src/main/resources/META-INF/}} after the build. We should 
remove it and handle it the Maven way.

Details in 
https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser

Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
to help!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340323#comment-15340323
 ] 

Sean Busbey commented on HADOOP-13297:
--

please file a different jira, since that issue is likely going to require 
different changes.

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Sean Busbey
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340321#comment-15340321
 ] 

Mingliang Liu commented on HADOOP-13280:


Thanks [~cmccabe] for your review and commit. For coding style fixes, we'll 
track the effort separately.

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As for {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340319#comment-15340319
 ] 

Xiao Chen commented on HADOOP-13297:


BTW, there's also an email about the L files getting copied into 
hadoop-build-tools/src/main/resources/META-INF/ ; could you please help to have 
that covered here too, [~busbey]?
If you think that should go with a separate jira, feel free to let me know and 
I'll create one. Thanks!

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Sean Busbey
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13280:
---
Comment: was deleted

(was: Thanks [~cmccabe] for the explanation. I find "readOps", "largeReadOps" 
and "writeOps" all use the same approach {{Long.valueOf( readOps + 
largeReadOps)}}. I think we can stick to this. For coding style consistency, I 
can prepare an updated patch very soon.)

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As for {{readOps}}, 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340315#comment-15340315
 ] 

Xiao Chen commented on HADOOP-13297:


Many thanks to Akira and Sean for pushing this forward!

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Sean Busbey
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-5353) add progress callback feature to the slow FileUtil operations with ability to cancel the work

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340281#comment-15340281
 ] 

Hadoop QA commented on HADOOP-5353:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common-project/hadoop-common: The patch 
generated 62 new + 134 unchanged - 6 fixed = 196 total (was 140) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Private method org.apache.hadoop.fs.FileUtil.copy(FileSystem, FileStatus, 
File, boolean, Configuration) is never called  At FileUtil.java:boolean, 
Configuration) is never called  At FileUtil.java:[line 354] |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810910/HADOOP-5353.002.patch 
|
| JIRA Issue | HADOOP-5353 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42497f147a95 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5370a6f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9836/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9836/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9836/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340267#comment-15340267
 ] 

Steve Loughran commented on HADOOP-13203:
-

Here is the comparison of a small sequence of forward/backward read operations, 
between the random and sequential policies

"random" keeps buffer sizes in requests down to a minimum, hence no bytes 
wasted in close or aborts {{BytesReadInClose=0, BytesDiscardedInAbort=0}}

"sequential" expects a read from 0-len, requests the entire range. Forward 
seeks can be skipped, backward seeks trigger retreat. However, the default 
block size (32K) is too low for any forward skip (should we change this to 
something like 128K?), then there's an abort, leading to the values 
{{BytesReadInClose=0, BytesDiscardedInAbort=80771308}}. Those abort bytes never 
get read in, but they do measure how oversized the request was

{code}
testRandomIO_RandomPolicy: Random IO with policy "random"

2016-06-20 20:35:15,680 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:setInputPolicy(566)) - Setting input strategy: random
2016-06-20 20:35:15,680 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:open(617)) - Opening 's3a://landsat-pds/scene_list.gz' for 
reading.
2016-06-20 20:35:15,680 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - invocations_getfilestatus += 
1  ->  2
2016-06-20 20:35:15,680 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1421)) - Getting path status for 
s3a://landsat-pds/scene_list.gz  (scene_list.gz)
2016-06-20 20:35:15,681 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests += 
1  ->  2
2016-06-20 20:35:15,846 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1432)) - Found exact file: normal file
2016-06-20 20:35:15,848 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:setInputPolicy(566)) - Setting input strategy: normal
2016-06-20 20:35:15,850 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:reopen(145)) - reopen(s3a://landsat-pds/scene_list.gz) for 
read from new offset at targetPos=2097152, length=131072, 
requestedStreamLen=2228224, streamPosition=0, nextReadPosition=2097152
2016-06-20 20:35:16,069 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:closeStream(470)) - Stream s3a://landsat-pds/scene_list.gz 
closed: seekInStream(); streamPos=2228224, nextReadPos=131072,request range 
2097152-2228224 length=2228224
2016-06-20 20:35:16,069 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:reopen(145)) - reopen(s3a://landsat-pds/scene_list.gz) for 
read from new offset at targetPos=131072, length=131072, 
requestedStreamLen=262144, streamPosition=131072, nextReadPosition=131072
2016-06-20 20:35:16,259 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:closeStream(470)) - Stream s3a://landsat-pds/scene_list.gz 
closed: seekInStream(); streamPos=262144, nextReadPos=5242880,request range 
131072-262144 length=262144
2016-06-20 20:35:16,259 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:reopen(145)) - reopen(s3a://landsat-pds/scene_list.gz) for 
read from new offset at targetPos=5242880, length=65536, 
requestedStreamLen=5308416, streamPosition=5242880, nextReadPosition=5242880
2016-06-20 20:35:16,437 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:closeStream(470)) - Stream s3a://landsat-pds/scene_list.gz 
closed: seekInStream(); streamPos=5308416, nextReadPos=1048576,request range 
5242880-5308416 length=5308416
2016-06-20 20:35:16,437 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:reopen(145)) - reopen(s3a://landsat-pds/scene_list.gz) for 
read from new offset at targetPos=1048576, length=1048576, 
requestedStreamLen=2097152, streamPosition=1048576, nextReadPosition=1048576
2016-06-20 20:35:16,994 [Thread-0] INFO  contract.ContractTestUtils 
(ContractTestUtils.java:end(1262)) - Duration of Time to execute 4 reads of 
total size 1376256 bytes: 1,141,400,611 nS
2016-06-20 20:35:16,994 [Thread-0] DEBUG s3a.S3AFileSystem 
(S3AInputStream.java:closeStream(470)) - Stream s3a://landsat-pds/scene_list.gz 
closed: close() operation; streamPos=2097152, nextReadPos=0,request range 
1048576-2097152 length=2097152
2016-06-20 20:35:16,995 [Thread-0] INFO  scale.TestS3AInputStreamPerformance 
(TestS3AInputStreamPerformance.java:logTimePerIOP(165)) - Time per byte read: 
829 nS
2016-06-20 20:35:16,996 [Thread-0] INFO  scale.TestS3AInputStreamPerformance 
(TestS3AInputStreamPerformance.java:executeRandomIO(388)) - Effective bandwidth 
1.205761 MB/S
2016-06-20 20:35:16,997 [Thread-0] INFO  scale.TestS3AInputStreamPerformance 
(TestS3AInputStreamPerformance.java:logStreamStatistics(292)) - Stream 
Statistics
StreamStatistics{OpenOperations=4, CloseOperations=4, Closed=4, Aborted=0, 
SeekOperations=2, ReadExceptions=0, ForwardSeekOperations=0, 
BackwardSeekOperations=2, 

[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Attachment: HADOOP-13203-branch-2-007.patch

HADOOP-13203 Patch 007 cleanup, including findbugs and checkstyle

While this patch is ready for some review, there's one feature I want to write 
a test for and then address: a read which starts in the current requested range 
but goes past it causes the stream to be closed and reopened at the new 
position. This can be fixed. 

I plan to do it by having {{read(bytes[])}} return only the bytes in the 
current request; this meets the semantics of {{read(bytes[])}}. The 
{{readFully()}} calls already iterate over the read() calls, so this is handled 
at that level...there is no need to be clever further down.
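
For illustration, a rough sketch of that idea: cap a read at the end of the
currently open ranged request and return a short count. The field names are
illustrative and the lazy reopen is not shown:

{code}
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: never read past the end of the open ranged request; a short
// count is within the contract of read(byte[], int, int), and readFully()-style
// callers already loop until they have all the bytes they asked for.
class RangeCappedRead {
  private long pos;                  // next byte to hand to the caller
  private long contentRangeFinish;   // exclusive end of the open ranged request

  int read(InputStream wrappedStream, byte[] buf, int off, int len)
      throws IOException {
    long remaining = contentRangeFinish - pos;
    if (remaining <= 0) {
      // Everything in the current request has been consumed; the real stream
      // would lazily reopen a new range at 'pos' before the next read.
      return -1;
    }
    int capped = (int) Math.min(len, remaining);
    int bytesRead = wrappedStream.read(buf, off, capped);
    if (bytesRead > 0) {
      pos += bytesRead;
    }
    return bytesRead;
  }
}
{code}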

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen(). As part of lazySeek(), the stream sometimes 
> has to be closed and reopened, but in many cases it was closed with abort(), 
> leaving the internal HTTP connection unusable. This incurs a lot of connection 
> establishment cost in some jobs. It would be good to set the correct value for 
> the stream length so that connection aborts are avoided. 
> I will post the patch once the AWS tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Open  (was: Patch Available)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen(). As part of lazySeek(), the stream sometimes 
> has to be closed and reopened, but in many cases it was closed with abort(), 
> leaving the internal HTTP connection unusable. This incurs a lot of connection 
> establishment cost in some jobs. It would be good to set the correct value for 
> the stream length so that connection aborts are avoided. 
> I will post the patch once the AWS tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13297:
---
Assignee: Sean Busbey

Hi [~busbey], I added you to the contributors role in Hadoop Common and assigned this to you.

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Sean Busbey
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13286) add a scale test to do gunzip and linecount

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13286:

Status: Open  (was: Patch Available)

> add a scale test to do gunzip and linecount
> ---
>
> Key: HADOOP-13286
> URL: https://issues.apache.org/jira/browse/HADOOP-13286
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13286-branch-2-001.patch
>
>
> The HADOOP-13203 patch proposal showed that there were performance problems 
> downstream which weren't surfacing in the current scale tests.
> Trying to decompress the .gz test file and then go through it with LineReader 
> models a basic use case: parse a .csv.gz data source. 
> Add this, with metric printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Patch Available  (was: Open)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen(). As part of lazySeek(), the stream sometimes 
> has to be closed and reopened, but in many cases it was closed with abort(), 
> leaving the internal HTTP connection unusable. This incurs a lot of connection 
> establishment cost in some jobs. It would be good to set the correct value for 
> the stream length so that connection aborts are avoided. 
> I will post the patch once the AWS tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13286) add a scale test to do gunzip and linecount

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340223#comment-15340223
 ] 

Steve Loughran commented on HADOOP-13286:
-

I've incorporated this into the HADOOP-13203 006 patch, and cut out some other 
tests which were essentially surplus.

> add a scale test to do gunzip and linecount
> ---
>
> Key: HADOOP-13286
> URL: https://issues.apache.org/jira/browse/HADOOP-13286
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13286-branch-2-001.patch
>
>
> The HADOOP-13203 patch proposal showed that there were performance problems 
> downstream which weren't surfacing in the current scale tests.
> Decompressing the .gz test file and then going through it with LineReader 
> models a basic use case: parsing a .csv.gz data source. 
> Add this, with metric printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Attachment: HADOOP-13203-branch-2-006.patch

HADOOP-13203 patch 006. Explicit policies based on fadvise terminology; the 
configuration option uses "experimental" in its name to indicate its status. An 
explicit test of random IO shows a 4x speedup over sequential IO. Also: cut 
back on some of the scale tests that were just doing sequential seek+read with 
different readahead sizes; they don't add much and just take up another 10-20s.
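A hedged sketch of how such a policy switch would typically be exercised from client 
code; the property name {{fs.s3a.experimental.input.fadvise}} and the value "random" 
are assumptions inferred from the comment above, not confirmed from the patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RandomIoPolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed property name/value, per the "experimental" fadvise wording above.
    conf.set("fs.s3a.experimental.input.fadvise", "random");

    Path path = new Path("s3a://example-bucket/data/columnar.orc"); // placeholder
    FileSystem fs = path.getFileSystem(conf);

    try (FSDataInputStream in = fs.open(path)) {
      byte[] buf = new byte[8192];
      // Random-access pattern: a backwards seek that a purely sequential read
      // policy would service by aborting and reopening the HTTP connection.
      in.readFully(1_000_000L, buf);
      in.readFully(10_000L, buf);
    }
  }
}
{code}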

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen().  As part of lazySeek(), the stream sometimes 
> has to be closed and reopened, and in many cases it is closed with abort(), 
> making the internal HTTP connection unusable. This incurs significant 
> connection establishment cost in some jobs.  It would be good to set the 
> correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340210#comment-15340210
 ] 

Sean Busbey commented on HADOOP-13297:
--

I no longer have sufficient perms on the HADOOP tracker to self-assign, but I 
plan to take a crack at this issue this week.

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs only if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Open  (was: Patch Available)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> stream_stats.tar.gz
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen().  As part of lazySeek(), the stream sometimes 
> has to be closed and reopened, and in many cases it is closed with abort(), 
> making the internal HTTP connection unusable. This incurs significant 
> connection establishment cost in some jobs.  It would be good to set the 
> correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340197#comment-15340197
 ] 

Akira AJISAKA commented on HADOOP-13297:


Error log in branch-2.7.3.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hadoop-project: Resources archive cannot be found. Failure to find 
org.apache.hadoop:hadoop-build-tools:jar:2.7.3 in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced
[ERROR] 
[ERROR] Try downloading the file manually from the project website.
[ERROR] 
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hadoop 
-DartifactId=hadoop-build-tools -Dversion=2.7.3 -Dpackaging=jar 
-Dfile=/path/to/file
[ERROR] 
[ERROR] Alternatively, if you host your own repository you can deploy the file 
there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hadoop 
-DartifactId=hadoop-build-tools -Dversion=2.7.3 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR] 
[ERROR] 
[ERROR] org.apache.hadoop:hadoop-build-tools:jar:2.7.3
[ERROR] 
[ERROR] from the specified remote repositories:
[ERROR] apache.snapshots.https 
(https://repository.apache.org/content/repositories/snapshots, releases=true, 
snapshots=true),
[ERROR] repository.jboss.org 
(http://repository.jboss.org/nexus/content/groups/public/, releases=true, 
snapshots=false),
[ERROR] central (https://repo.maven.apache.org/maven2, releases=true, 
snapshots=false)
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-project
{noformat}

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs only if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13297:
---
Component/s: build

> hadoop-common module depends on hadoop-build-tools module, but the modules 
> are not ordered correctly
> 
>
> Key: HADOOP-13297
> URL: https://issues.apache.org/jira/browse/HADOOP-13297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>
> After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
> branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
> following:
> * hadoop-project module depends on hadoop-build-tools module, but 
> hadoop-project module does not declare hadoop-build-tools as its submodule. 
> Therefore, hadoop-build-tools is not built before building hadoop-project.
> * hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)
> The build failure occurs only if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13297:
--

 Summary: hadoop-common module depends on hadoop-build-tools 
module, but the modules are not ordered correctly
 Key: HADOOP-13297
 URL: https://issues.apache.org/jira/browse/HADOOP-13297
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira AJISAKA


After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
following:
* hadoop-project module depends on hadoop-build-tools module, but 
hadoop-project module does not declare hadoop-build-tools as its submodule. 
Therefore, hadoop-build-tools is not built before building hadoop-project.
* hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
(https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)

The build failure occurs only if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340186#comment-15340186
 ] 

Akira AJISAKA commented on HADOOP-12893:


Sure! Filed HADOOP-13297.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340173#comment-15340173
 ] 

Hudson commented on HADOOP-13280:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9987 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9987/])
HADOOP-13280. FileSystemStorageStatistics#getLong(“readOps“) should 
(cmccabe: rev 5370a6ffaec5227c0978f10c86a5811155271933)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemStorageStatistics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java


> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, the 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make the {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340169#comment-15340169
 ] 

Mingliang Liu commented on HADOOP-13280:


Thanks [~cmccabe] for the explanation. I find "readOps", "largeReadOps" and 
"writeOps" all use the same approach {{Long.valueOf( readOps + largeReadOps)}}. 
I think we can stick to this. For coding style consistency, I can prepare an 
updated patch very soon.
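For readers skimming the thread, a simplified sketch of the approach being discussed: 
fold {{largeReadOps}} into the "readOps" result inside the getLong() lookup. The 
counters and class below are illustrative stand-ins, not the committed 
FileSystemStorageStatistics code.

{code}
// Illustrative stand-in for the per-FS statistics counters.
class StorageStatsSketch {
  private long readOps;
  private long largeReadOps;
  private long writeOps;

  void incrementReadOps(long n)      { readOps += n; }
  void incrementLargeReadOps(long n) { largeReadOps += n; }
  void incrementWriteOps(long n)     { writeOps += n; }

  Long getLong(String key) {
    switch (key) {
      case "readOps":
        // Fold largeReadOps in, matching FileSystem$Statistics#getReadOps().
        return Long.valueOf(readOps + largeReadOps);
      case "largeReadOps":
        return Long.valueOf(largeReadOps);
      case "writeOps":
        return Long.valueOf(writeOps);
      default:
        return null; // untracked statistic
    }
  }
}
{code}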

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, the 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make the {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340164#comment-15340164
 ] 

ASF GitHub Bot commented on HADOOP-9613:


Github user aajisaka commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/76#discussion_r67744731
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
 ---
@@ -106,76 +105,79 @@ protected void configureServlets() {
   bind(ResourceManager.class).toInstance(rm);
   serve("/*").with(GuiceContainer.class);
 }
-  });
+  };
 
-  public class GuiceServletConfig extends GuiceServletContextListener {
-@Override
-protected Injector getInjector() {
-  return injector;
-}
+  static {
+GuiceServletConfig.setInjector(
+Guice.createInjector(new WebServletModule()));
   }
 
   private static void setupQueueConfiguration(
-  CapacitySchedulerConfiguration conf, ResourceManager rm) {
+  CapacitySchedulerConfiguration config, ResourceManager 
resourceManager) {
--- End diff --

Thank you for updating the pull request! Mostly looks good to me.
(nit) Unused argument `resourceManager` can be removed. I'm +1 if that is 
addressed.


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running an mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340131#comment-15340131
 ] 

Arpit Agarwal commented on HADOOP-13263:


Hi [~sodonnell], the v3 patch lgtm.

Can you please fix the issues flagged by checkstyle? Also, I think the findbugs 
warning is spurious, but we should keep findbugs happy. It can probably be 
suppressed by taking a local reference to the executorService within the lock 
and then only using that local reference outside the lock. +1 with checkstyle 
and findbugs addressed.

Some of the new tests rely on events occurring with expected delays. We've seen 
that such tests can be flaky, especially on slower hardware like under-provisioned 
VMs. We can file a follow-up jira to make the tests more resilient.
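A tiny sketch of the suggested findbugs workaround, with assumed field and lock names 
(not the actual code under review): copy the field into a local variable while holding 
the lock, and only touch the local outside it.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: names are assumptions, not the patch under review.
class BackgroundRefreshSketch {
  private final Object lock = new Object();
  private ExecutorService executorService; // lazily created under the lock

  void scheduleReload(Runnable reloadTask) {
    ExecutorService local;
    synchronized (lock) {
      if (executorService == null) {
        executorService = Executors.newFixedThreadPool(1);
      }
      // Take a local reference while holding the lock...
      local = executorService;
    }
    // ...and use only the local reference outside the lock, which keeps
    // findbugs' inconsistent-synchronization check quiet.
    local.submit(reloadTask);
  }
}
{code}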

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
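The mechanism the description refers to is Guava's refreshAfterWrite combined with an 
asynchronously reloading CacheLoader. A minimal standalone sketch, with the group 
lookup, refresh window, and pool size hard-coded purely for illustration:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class BackgroundGroupCacheSketch {
  public static void main(String[] args) throws Exception {
    // Thread pool backing the background reloads (the ".threads" setting above).
    ExecutorService pool = Executors.newFixedThreadPool(1);

    CacheLoader<String, List<String>> loader = new CacheLoader<String, List<String>>() {
      @Override
      public List<String> load(String user) {
        // Stand-in for the (potentially slow) real group lookup.
        return Arrays.asList(user, "users");
      }
    };

    LoadingCache<String, List<String>> groups = CacheBuilder.newBuilder()
        .refreshAfterWrite(300, TimeUnit.SECONDS)         // refresh window
        .build(CacheLoader.asyncReloading(loader, pool)); // reload off-thread

    // The first access blocks on load(); accesses after the refresh window
    // return the stale value immediately while a reload runs on the pool.
    System.out.println(groups.get("hdfs"));
    pool.shutdown();
  }
}
{code}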



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-5353) add progress callback feature to the slow FileUtil operations with ability to cancel the work

2016-06-20 Thread Pranav Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranav Prakash updated HADOOP-5353:
---
Status: Patch Available  (was: Open)

> add progress callback feature to the slow FileUtil operations with ability to 
> cancel the work
> -
>
> Key: HADOOP-5353
> URL: https://issues.apache.org/jira/browse/HADOOP-5353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Pranav Prakash
>Priority: Minor
> Attachments: HADOOP-5353.000.patch, HADOOP-5353.001.patch, 
> HADOOP-5353.002.patch
>
>
> This is something only of relevance to people doing front ends to FS 
> operations, and as they could take the code in FSUtil and add something with 
> this feature, it's a blocker to none of them. 
> The current FileUtil.copy can take a long time to move large files around, but 
> there is no progress indicator for GUIs, or a way to cancel the operation 
> mid-way short of interrupting the thread or closing the filesystem.
> I propose adding a FileIOProgress interface to the copy ops, one that has a single 
> method to notify listeners of bytes read and written, and the number of files 
> handled.
> {code}
> interface FileIOProgress {
>  boolean progress(int files, long bytesRead, long bytesWritten);
> }
> {code}
> The return value would be true to continue the operation, or false to stop 
> the copy and leave the FS in whatever incomplete state it is in currently. 
> It could even be fancier: have beginFileOperation and endFileOperation 
> callbacks to pass in the name of the current file being worked on, though I 
> don't have a personal need for that.
> GUIs could show progress bars and cancel buttons; other tools could use the 
> interface to pass any cancellation notice upstream.
> The FileUtil.copy operations would call this interface (blocking) after every 
> block copy, so the frequency of invocation would depend on block size and 
> network/disk speeds, which is also why I don't propose having any 
> percentage-done indicators; it's too hard to predict the percentage of time 
> done for distributed file IO with any degree of accuracy.
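To make the proposal concrete, a hedged sketch of a copy loop that invokes such a 
callback after every buffer; the FileIOProgress signature is taken from the 
description above, while everything else (buffer size, helper names) is illustrative 
rather than the eventual FileUtil patch.

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Interface as proposed in the description; the rest is illustrative.
interface FileIOProgress {
  boolean progress(int files, long bytesRead, long bytesWritten);
}

class ProgressCopySketch {
  /** Copies one stream, reporting after every buffer; returns false if cancelled. */
  static boolean copyWithProgress(InputStream in, OutputStream out,
      FileIOProgress callback) throws IOException {
    byte[] buf = new byte[4096];
    long copied = 0;
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      copied += n;
      // Blocking callback after each block; false means "stop where we are".
      if (!callback.progress(1, copied, copied)) {
        return false;
      }
    }
    return true;
  }
}
{code}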



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340119#comment-15340119
 ] 

Mingliang Liu commented on HADOOP-13283:


Thanks, [~ste...@apache.org]. I also believe reset is preferred for the production 
code use case. When I discussed this with [~hitesh], my impression was that from 
Tez's point of view the reset operation is good enough for now (correct me if I'm 
wrong, [~hitesh]). However, I do believe snapshot is a nice-to-have feature, 
especially for testing/debugging cases. As it's non-trivial (to me) to take 
care of the memory overhead and consistency, and it basically needs a different 
implementation, I think we can address it in separate JIRAs later.

> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> Applications may reuse the file system object across jobs and its storage 
> statistics should be reset. Specifically, {{FileSystem.Statistics}} supports 
> reset and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340108#comment-15340108
 ] 

Sean Busbey commented on HADOOP-12893:
--

{quote}
bq. the current patch moves the module definition from its current home to be a 
child module of the hadoop-project, without changing its location in the 
filesystem.
Agreed, but as Eric said, adding hadoop-build-tools as a dependency does not 
fix the build. (I can reproduce the failure in branch-2.7.3) I'm thinking we 
need to move the module definition to fix the build. Should we move the module 
definition and change the location in the filesystem?
{quote}

Since the immediate problem is masked at the moment, could you file a separate 
jira to track this dependency graph error? I'll look into fixing it.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13291) Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be correctly implemented

2016-06-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340085#comment-15340085
 ] 

Mingliang Liu commented on HADOOP-13291:


[~ste...@apache.org], yes I prefer 2.8+ as well. Thanks!

> Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be 
> correctly implemented
> ---
>
> Key: HADOOP-13291
> URL: https://issues.apache.org/jira/browse/HADOOP-13291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13291.000.patch, HADOOP-13291.001.patch, 
> HADOOP-13291.002.patch
>
>
> To probe a stat in {{StorageStatistics}}, users can use the 
> {{StorageStatistics#isTracked()}} API. Currently {{DFSOpsCountStatistics}} 
> implements this function incorrectly, and {{S3AStorageStatistics}}, which 
> borrowed the same idea, has the same error.
> # {{isTracked()}} is not correctly implemented. I believe this was an 
> omission in the code.
> # {{isTracked()}} checks a stat by its operation symbol (instead of the enum 
> name), so {{getLongStatistics()}} should return LongStatistics iterators with 
> the symbol as the name, instead of the enum variable name. Otherwise 
> {{isTracked(getLongStatistics().next().getName())}} returns false, which will 
> lead to confusion.
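A rough sketch of the consistency the description asks for, keeping isTracked() and 
the names reported by getLongStatistics() keyed by the same operation symbol; the 
enum values and map here are simplified stand-ins, not the DFSOpsCountStatistics or 
S3AStorageStatistics code.

{code}
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for an ops-count statistics class.
class OpsCountStatisticsSketch {
  enum OpType {
    CREATE("op_create"),
    OPEN("op_open");

    private final String symbol;
    OpType(String symbol) { this.symbol = symbol; }
    String getSymbol() { return symbol; }
  }

  private final Map<String, Long> opsCount = new HashMap<>();

  OpsCountStatisticsSketch() {
    // Key everything by the symbol so that the names reported by the
    // long-statistics iterator and the keys accepted by isTracked() agree.
    for (OpType op : OpType.values()) {
      opsCount.put(op.getSymbol(), 0L);
    }
  }

  boolean isTracked(String key) {
    // Membership test on the symbol, not on the enum constant's name().
    return opsCount.containsKey(key);
  }

  Long getLong(String key) {
    return opsCount.get(key);
  }
}
{code}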



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340086#comment-15340086
 ] 

Akira AJISAKA commented on HADOOP-12893:


Thank you for the comments.
bq. the current patch moves the module definition from its current home to be a 
child module of the hadoop-project, without changing its location in the 
filesystem.
Agreed, but as Eric said, adding hadoop-build-tools as a dependency does not 
fix the build. (I can reproduce the failure in branch-2.7.3) I'm thinking we 
need to move the module definition to fix the build. Should we move the module 
definition and change the location in the filesystem?

FYI: Now we can build trunk, branch-2, branch-2.8, and branch-2.7 without any 
fix because hadoop-build-tools jar and pom are uploaded to snapshot repository.
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps

2016-06-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340017#comment-15340017
 ] 

Colin Patrick McCabe commented on HADOOP-13280:
---

Java should be able to widen from int to long without a typecast. However, 
let's get this important fix in, and then worry about making it prettier.

Thanks, [~liuml07].  +1.

> FileSystemStorageStatistics#getLong(“readOps“) should return readOps + 
> largeReadOps
> ---
>
> Key: HADOOP-13280
> URL: https://issues.apache.org/jira/browse/HADOOP-13280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13280-branch-2.8.000.patch, 
> HADOOP-13280.000.patch, HADOOP-13280.001.patch
>
>
> Currently the {{FileSystemStorageStatistics}} instance simply returns data from 
> {{FileSystem$Statistics}}. As to {{readOps}}, the 
> {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We 
> should make the {{FileSystemStorageStatistics#getLong(“readOps“)}} return the 
> sum as well.
> Moreover, there are no unit tests for {{FileSystemStorageStatistics}}, and this 
> JIRA will also address that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15340014#comment-15340014
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 33 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} root: The patch generated 0 new + 369 unchanged - 58 
fixed = 369 total (was 427) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
36s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-mapreduce-client-app in 

[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339947#comment-15339947
 ] 

Elliott Clark commented on HADOOP-12975:


Hahah thanks!

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13296) Cleanup javadoc for Path

2016-06-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13296:
--
Status: Patch Available  (was: Open)

> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13296) Cleanup javadoc for Path

2016-06-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13296:
--
Attachment: HADOOP-13296.001.patch

> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13296) Cleanup javadoc for Path

2016-06-20 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-13296:
-

 Summary: Cleanup javadoc for Path
 Key: HADOOP-13296
 URL: https://issues.apache.org/jira/browse/HADOOP-13296
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton
Priority: Minor


The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-06-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339794#comment-15339794
 ] 

Jason Lowe commented on HADOOP-10048:
-

Technically this is a performance improvement rather than a bug fix. Typically 
we wouldn't backport it, since those other releases are in maintenance and 
should only receive fixes rather than features/improvements, to reduce the risk 
of destabilizing those releases.  If we feel this is an important enough 
performance improvement and the rewards outweigh the risks, then I'm OK with 
it.  It doesn't pick cleanly, but it would if we also backported HADOOP-8436, 
HADOOP-8437, and HADOOP-12252, which are all bug fixes.


> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.8.0
>
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, 
> HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.021.incompatible.patch

Fixing checkstyle again. 


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.019.incompatible.patch, 
> HADOOP-9613.020.incompatible.patch, HADOOP-9613.021.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running an mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339594#comment-15339594
 ] 

Sean Busbey edited comment on HADOOP-12893 at 6/20/16 2:33 PM:
---

-1 (non-binding) on the addendum. The current patch moves the module definition 
from its current home to be a child module of hadoop-project, without 
changing its location in the filesystem. It would be better to instead just 
declare hadoop-build-tools as a dependency of hadoop-project (in the 
dependencies section of the pom).


was (Author: busbey):
The current patch moves the module definition from its current home to be a 
child module of hadoop-project, without changing its location in the 
filesystem. It would be better to instead just declare hadoop-build-tools as a 
dependency of hadoop-project (in the dependencies section of the pom).

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339594#comment-15339594
 ] 

Sean Busbey commented on HADOOP-12893:
--

The current patch moves the module definition from its current home to be a 
child module of hadoop-project, without changing its location in the 
filesystem. It would be better to instead just declare hadoop-build-tools as a 
dependency of hadoop-project (in the dependencies section of the pom).

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13295) Possible Vulnerability in DataNodes via SSH

2016-06-20 Thread Mobin Ranjbar (JIRA)
Mobin Ranjbar created HADOOP-13295:
--

 Summary: Possible Vulnerability in DataNodes via SSH
 Key: HADOOP-13295
 URL: https://issues.apache.org/jira/browse/HADOOP-13295
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mobin Ranjbar


I suspected something weird in my Hadoop cluster. When I run the datanodes, after a 
while my servers (except the namenode) go down after hitting the SSH max attempts 
limit. When I checked 'systemctl status ssh', I found invalid username/password 
attempts via SSH; the SSH daemon blocked all incoming connections and I got 
connection refused.

I have no problem when my datanodes are not running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339528#comment-15339528
 ] 

Eric Badger commented on HADOOP-12893:
--

I'm not sure if this is the correct fix, but I have confirmed that this patch 
does fix the build for branch-2.7. 

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339514#comment-15339514
 ] 

Steve Loughran commented on HADOOP-13294:
-

{code}
$ bin/hadoop fs -rm -r -f s3a://stevel-ireland/
16/06/20 14:44:55 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/06/20 14:44:56 INFO Configuration.deprecation: io.bytes.per.checksum is 
deprecated. Instead, use dfs.bytes-per-checksum

rm: `s3a://stevel-ireland/` Input/output error

{code}

The rm is failing; maybe related to HADOOP-12977. It's not reported very well, though.

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some, ideally 
> generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-06-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13294:
---

 Summary: Test hadoop fs shell against s3a; fix problems
 Key: HADOOP-13294
 URL: https://issues.apache.org/jira/browse/HADOOP-13294
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


There are no tests of {{hadoop -fs}} commands against s3a; add some, ideally 
generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12996) remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12996:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-12994)

> remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream
> -
>
> Key: HADOOP-12996
> URL: https://issues.apache.org/jira/browse/HADOOP-12996
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> FSInputStream is universally used and subclassed, and is core to the HCFS 
> specifications, yet it is tagged {{@InterfaceAudience.LimitedPrivate("HDFS")}}.
> Remove that tag, as it's clearly untrue.
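
For illustration, a hedged sketch of the class-level change; the replacement 
audience/stability tags below are an assumption on my part, since the JIRA only 
asks for the LimitedPrivate tag to be removed:

{code}
// Sketch of the proposed annotation change; the "after" tags are assumptions.
package org.apache.hadoop.fs;

import java.io.InputStream;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Before: @InterfaceAudience.LimitedPrivate({"HDFS"})
@InterfaceAudience.Public        // assumed replacement
@InterfaceStability.Evolving     // assumed replacement
public abstract class FSInputStream extends InputStream
    implements Seekable, PositionedReadable {
  // existing implementation unchanged
}
{code}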



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12996) remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12996:

  Priority: Trivial  (was: Major)
Issue Type: Improvement  (was: Bug)

> remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream
> -
>
> Key: HADOOP-12996
> URL: https://issues.apache.org/jira/browse/HADOOP-12996
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> FSInputStream is universally used and subclassed, and is core to the HCFS 
> specifications, yet it is tagged {{@InterfaceAudience.LimitedPrivate("HDFS")}}.
> Remove that tag, as it's clearly untrue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13293) add a special 0 byte input stream for empty blobs

2016-06-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13293:
---

 Summary: add a special 0 byte input stream for empty blobs
 Key: HADOOP-13293
 URL: https://issues.apache.org/jira/browse/HADOOP-13293
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


S3A (and the other object stores) do a lot of IO, even for 0-byte files. They 
don't need to: that's a special case which can be handled locally. A special 
ZeroByteInputStream class could handle this for all the object stores.

This isn't much of an optimization: code shouldn't normally need to go through 
0-byte files, but we see evidence that it sometimes does.
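
A minimal sketch of the idea (the class name comes from the description above; 
the implementation details are assumptions, not a patch):

{code}
// Minimal sketch: a stream for 0-byte objects, answered entirely locally so no
// GET is ever issued against the store. Details are assumptions, not a patch.
import java.io.EOFException;
import java.io.IOException;
import org.apache.hadoop.fs.FSInputStream;

public class ZeroByteInputStream extends FSInputStream {
  @Override
  public int read() throws IOException {
    return -1;                                    // always at EOF
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    return len == 0 ? 0 : -1;                     // nothing to read
  }

  @Override
  public void seek(long pos) throws IOException {
    if (pos != 0) {
      throw new EOFException("Cannot seek to " + pos + " in an empty stream");
    }
  }

  @Override
  public long getPos() throws IOException {
    return 0;
  }

  @Override
  public boolean seekToNewSource(long targetPos) throws IOException {
    return false;                                 // no alternative source
  }
}
{code}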



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13286) add a scale test to do gunzip and linecount

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339477#comment-15339477
 ] 

Steve Loughran commented on HADOOP-13286:
-

In a test against S3 Ireland, opening the file with the sequential policy, it 
took 9.6s to read:
{code}
Running org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.537 sec - in 
org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance
{code}

The closest equivalent test is {{testTimeToOpenAndReadWholeFileByByte}}, which, 
interestingly, takes slightly longer, at least for me. (disclaimer, this is 
{code}
Running org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.329 sec - in 
org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance
{code}

Given that decompress + line-by-line reading is a pattern we see in real code, 
I'd actually like to keep it and cut the {{testTimeToOpenAndReadWholeFileByByte}} 
test.

> add a scale test to do gunzip and linecount
> ---
>
> Key: HADOOP-13286
> URL: https://issues.apache.org/jira/browse/HADOOP-13286
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13286-branch-2-001.patch
>
>
> the HADOOP-13203 patch proposal showed that there were performance problems 
> downstream which weren't surfacing in the current scale tests.
> Trying to decompress the .gz test file and then go through it with LineReader 
> models a basic use case: parse a .csv.gz data source. 
> Add this, with metric printing
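
The pattern the description models, as a standalone sketch (the path is a 
placeholder; this is not the scale test itself):

{code}
// Minimal sketch: open a .csv.gz from a store, decompress with the configured
// codec, and count lines with LineReader. The path is a placeholder.
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.util.LineReader;

public class GunzipLineCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("s3a://example-bucket/data.csv.gz");   // placeholder
    FileSystem fs = path.getFileSystem(conf);
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(path);
    if (codec == null) {
      throw new IllegalArgumentException("No codec found for " + path);
    }
    long lines = 0;
    try (InputStream in = codec.createInputStream(fs.open(path))) {
      LineReader reader = new LineReader(in, conf);
      Text line = new Text();
      while (reader.readLine(line) > 0) {
        lines++;
      }
    }
    System.out.println("lines: " + lines);
  }
}
{code}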



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339434#comment-15339434
 ] 

Steve Loughran commented on HADOOP-13287:
-

+1 from me; tests pass, and I verified with a hadoop dist build by running 
hadoop fs -ls against a URL containing a '/' in the secret.

[~raviprakash] might want to test it too

> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains 
> '+'.
> ---
>
> Key: HADOOP-13287
> URL: https://issues.apache.org/jira/browse/HADOOP-13287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13287.001.patch, HADOOP-13287.002.patch
>
>
> HADOOP-3733 fixed accessing S3A with credentials on the command line for an 
> AWS secret key containing a '/'.  The patch added a new test suite: 
> {{TestS3ACredentialsInURL}}.  One of the tests fails if your AWS secret key 
> contains a '+'.
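
For context, a standalone illustration (not the Hadoop test, and whether this is 
the exact failure mode here is an assumption) of why '+' is troublesome when a 
secret travels inside a URL: decoding a raw '+' turns it into a space, so the 
secret has to be percent-encoded on the way in.

{code}
// Standalone illustration: '+' and '/' in a secret survive a URL round trip only
// when percent-encoded; decoding a raw '+' silently turns it into a space.
import java.net.URLDecoder;
import java.net.URLEncoder;

public class SecretInUrl {
  public static void main(String[] args) throws Exception {
    String secret = "abc+def/ghi";                              // placeholder secret
    String encoded = URLEncoder.encode(secret, "UTF-8");
    System.out.println(encoded);                                // abc%2Bdef%2Fghi
    System.out.println(URLDecoder.decode(encoded, "UTF-8"));    // abc+def/ghi
    System.out.println(URLDecoder.decode("abc+def", "UTF-8"));  // abc def  <-- the trap
  }
}
{code}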



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339436#comment-15339436
 ] 

Steve Loughran commented on HADOOP-13287:
-

(All tests passing against S3 Ireland; this time unintentionally VPNed into the 
US for extra latency.)

> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains 
> '+'.
> ---
>
> Key: HADOOP-13287
> URL: https://issues.apache.org/jira/browse/HADOOP-13287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13287.001.patch, HADOOP-13287.002.patch
>
>
> HADOOP-3733 fixed accessing S3A with credentials on the command line for an 
> AWS secret key containing a '/'.  The patch added a new test suite: 
> {{TestS3ACredentialsInURL}}.  One of the tests fails if your AWS secret key 
> contains a '+'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-06-20 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339431#comment-15339431
 ] 

Larry McCay commented on HADOOP-12804:
--

[~ste...@apache.org] - I assume that "Line 357: rethrow IOE to include 
toString() value of caught exception —so it doesn't get lost from logs." is 
supposed to be line 367 in S3AFileSystem.java on branch-2, correct?

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch, HADOOP-12804-003.patch, 
> HADOOP-12804-branch-2-002.patch, HADOOP-12804-branch-2-003.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.
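
A minimal sketch of the lookup this enables (the property name and provider path 
are illustrative):

{code}
// Minimal sketch: resolve the proxy password via Configuration.getPassword(),
// which consults configured credential providers before the clear-text config.
// Property name and provider path are illustrative.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class ProxyPasswordLookup {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // e.g. a jceks store holding the proxy password alongside the AWS keys:
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/tmp/s3a.jceks");                          // placeholder path
    char[] pass = conf.getPassword("fs.s3a.proxy.password");
    System.out.println(pass == null
        ? "proxy password not set"
        : "resolved a password of " + pass.length + " chars");
  }
}
{code}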



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339415#comment-15339415
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 33 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
34s{color} | {color:red} root: The patch generated 3 new + 369 unchanged - 58 
fixed = 372 total (was 427) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
14s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
45s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 44s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 

[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-06-20 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339409#comment-15339409
 ] 

Larry McCay commented on HADOOP-12804:
--

Thanks for the review, [~ste...@apache.org]!
The line wraps are where I cleaned up some checkstyle errors for lines that 
exceeded 80 chars; I can put the original lines back if you like.

I'll address the other two issues.

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch, HADOOP-12804-003.patch, 
> HADOOP-12804-branch-2-002.patch, HADOOP-12804-branch-2-003.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339380#comment-15339380
 ] 

Hadoop QA commented on HADOOP-13139:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
28s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
20s{color} | {color:red} root: The patch generated 2 new + 14 unchanged - 3 
fixed = 16 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
44s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13139:

Attachment: HADOOP-13139-branch-2-005.patch

Patch 005: the "config option ignored" warning message is now logged only once.

> Branch-2: S3a to use thread pool that blocks clients
> 
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch, 
> HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, 
> HADOOP-13139-branch-2.001.patch, HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will 
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter 
> 'fs.s3a.threads.core' has been removed and the behavior of the 
> ThreadPool for s3a has been changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13139:

Status: Patch Available  (was: Open)

> Branch-2: S3a to use thread pool that blocks clients
> 
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch, 
> HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, 
> HADOOP-13139-branch-2.001.patch, HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will 
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter 
> 'fs.s3a.threads.core' has been removed and the behavior of the 
> ThreadPool for s3a has been changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13291) Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be correctly implemented

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339295#comment-15339295
 ] 

Steve Loughran commented on HADOOP-13291:
-

I'll leave you to choose where to commit it; I'd guess in 2.8+, right?

> Probing stats in DFSOpsCountStatistics/S3AStorageStatistics should be 
> correctly implemented
> ---
>
> Key: HADOOP-13291
> URL: https://issues.apache.org/jira/browse/HADOOP-13291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13291.000.patch, HADOOP-13291.001.patch, 
> HADOOP-13291.002.patch
>
>
> To probe a stat in {{StorageStatistics}}, users can use the 
> {{StorageStatistics#isTracked()}} API. Currently {{DFSOpsCountStatistics}} 
> implements this function wrongly. {{S3AStorageStatistics}} borrowed the same 
> idea and also has the same error.
> # {{isTracked()}} is not correctly implemented; I believe this was an 
> omission in the code.
> # {{isTracked()}} checks a stat by its operation symbol (instead of the enum 
> name), so {{getLongStatistics()}} should return LongStatistics iterators named 
> by the symbol rather than by the enum variable name. Otherwise, 
> {{isTracked(getLongStatistics().next().getName())}} returns false, which will 
> lead to confusion.
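
A minimal sketch of the direction described above (the names are illustrative, 
not the actual patch): key the counters by the operation symbol so that 
{{isTracked()}} agrees with the names the statistics iterator reports.

{code}
// Minimal sketch (illustrative names): track counters in one map keyed by the
// operation symbol, so isTracked(name) is true for every reported stat name.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class OpCountStatisticsSketch {
  private final Map<String, AtomicLong> opsCount = new ConcurrentHashMap<>();

  public OpCountStatisticsSketch(Iterable<String> symbols) {
    for (String symbol : symbols) {
      opsCount.put(symbol, new AtomicLong());      // pre-register tracked stats
    }
  }

  public void increment(String symbol) {
    AtomicLong counter = opsCount.get(symbol);
    if (counter != null) {
      counter.incrementAndGet();
    }
  }

  /** True iff the symbol is one of the names exposed by longStatistics(). */
  public boolean isTracked(String symbol) {
    return opsCount.containsKey(symbol);
  }

  /** Expose stats under the same symbolic names that isTracked() accepts. */
  public Iterable<Map.Entry<String, AtomicLong>> longStatistics() {
    return opsCount.entrySet();
  }
}
{code}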



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13139:

Release Note: The configuration option 'fs.s3a.threads.core' is no longer 
supported. The string is still defined in 
org.apache.hadoop.fs.s3a.Constants.CORE_THREADS, however its value is ignored. 
If it is set, a warning message will be printed when initializing the S3A 
filesystem.

> Branch-2: S3a to use thread pool that blocks clients
> 
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch, 
> HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2.001.patch, 
> HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will 
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter 
> 'fs.s3a.threads.core' has been removed and the behavior of the 
> ThreadPool for s3a has been changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-06-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13139:

Status: Open  (was: Patch Available)

Cancelling the patch. I want to change the logging so the deprecation warning is 
logged only once, no matter how many FS instances are created.
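
One way to do that, as a minimal sketch (class and method names are illustrative, 
not from the patch): guard the warning with a static atomic flag so it fires at 
most once per JVM.

{code}
// Minimal sketch (illustrative, not the actual patch): warn that the core-threads
// option is ignored at most once per process, however many filesystems are created.
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class CoreThreadsDeprecation {
  private static final Logger LOG =
      LoggerFactory.getLogger(CoreThreadsDeprecation.class);
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  private CoreThreadsDeprecation() {
  }

  /** Called from filesystem initialization; logs the warning on first use only. */
  public static void warnIfSet(Configuration conf) {
    if (conf.get("fs.s3a.threads.core") != null
        && WARNED.compareAndSet(false, true)) {
      LOG.warn("Unsupported option \"fs.s3a.threads.core\" will be ignored");
    }
  }
}
{code}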

> Branch-2: S3a to use thread pool that blocks clients
> 
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch, 
> HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2.001.patch, 
> HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will 
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter 
> 'fs.s3a.threads.core' has been removed and the behavior of the 
> ThreadPool for s3a has been changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-06-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339257#comment-15339257
 ] 

Steve Loughran commented on HADOOP-12804:
-

* Line 357: rethrow IOE to include toString() value of caught exception —so it 
doesn't get lost from logs.
* looks like the IDE has decided to line wrap some existing lines; those 
changes should be cut from the patch
* in {{testProxyPasswordFromCredentialProvider}}, shouldn't 
{{conf2.getPassword(Constants.PROXY_PASSWORD)}} returning null be a failure? It 
should be detected with an {{assertNotNull()}} call

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch, HADOOP-12804-003.patch, 
> HADOOP-12804-branch-2-002.patch, HADOOP-12804-branch-2-003.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


