[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15349018#comment-15349018
 ] 

shimingfei commented on HADOOP-12756:
-

[~drankye] [~cnauroth] [~steve_l] Thanks for your suggestions. It's great 
that a branch is being created for OSS integration, and we can do follow-up 
optimizations on that branch. Kai, we can talk about the next steps offline.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but it is currently not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data on OSS without any code change, 
> narrowing the gap between the user’s application and its data storage, as has 
> been done for S3 in Hadoop.
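The "simple configuration" mentioned above would presumably look something like the following core-site.xml fragment. This is only an illustration: the property names here are assumptions modeled on how the s3a connector is configured, not a published interface of the attached patch.

```xml
<!-- Hypothetical core-site.xml sketch for an OSS connector; property names
     are illustrative assumptions, not taken from the patch -->
<property>
  <name>fs.oss.endpoint</name>
  <value>oss-cn-hangzhou.aliyuncs.com</value>
</property>
<property>
  <name>fs.oss.accessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.oss.accessKeySecret</name>
  <value>YOUR_ACCESS_KEY_SECRET</value>
</property>
```

With such configuration in place, existing jobs could address data as `oss://bucket/path` through the standard FileSystem API, which is what "without any code change" refers to.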



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13067) cleanup the dockerfile

2016-06-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13067:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0
   Status: Resolved  (was: Patch Available)

Backported to branch-2 and branch-2.8. Thanks Andrew and Allen!

> cleanup the dockerfile
> --
>
> Key: HADOOP-13067
> URL: https://issues.apache.org/jira/browse/HADOOP-13067
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-13067-branch-2.00.patch, HADOOP-13067.00.patch
>
>
> - hackage.haskell.org is pretty unreliable, switch to fpcomplete's mirror
> - quiet some of the output to make jenkins debugging easier
> - some other random fixes






[jira] [Commented] (HADOOP-8966) Change dfsadmin and haadmin commands to be case insensitive

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348996#comment-15348996
 ] 

Hadoop QA commented on HADOOP-8966:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
36s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813133/HADOOP-8966.001.patch 
|
| JIRA Issue | HADOOP-8966 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 66723f1ff3a4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bf74dbf |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-24 Thread Federico Czerwinski (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348987#comment-15348987
 ] 

Federico Czerwinski commented on HADOOP-13075:
--

Hey [~cnauroth], I saw that you set the target version to 2.9.0. Does that mean 
I have to make the patch against that version and not trunk? I'm not familiar 
with Hadoop's release cycle.
Thanks

Fede

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
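Since SSE-S3 opt-in already exists via `fs.s3a.server-side-encryption-algorithm` (HADOOP-10568), the proposal implies extending the configuration roughly as below. The `SSE-KMS` value and the key property are hypothetical sketches of what such "additional fs.s3a configuration properties" might look like, not the committed design.

```xml
<!-- Sketch only: fs.s3a.server-side-encryption-algorithm is the existing
     SSE-S3 knob; the SSE-KMS value and the key property are hypothetical -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:123456789012:key/example-key-id</value>
</property>
```

SSE-C would additionally need a way to supply the customer key material, which is why it is the more invasive of the two to support.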






[jira] [Updated] (HADOOP-13316) Delegation Tokens should not be renewed or retrieved using delegation token authentication

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13316:
---
Summary: Delegation Tokens should not be renewed or retrieved using 
delegation token authentication  (was: Delegation Tokens should not be renewed 
or retrived using delegation token authentication)

> Delegation Tokens should not be renewed or retrieved using delegation token 
> authentication
> --
>
> Key: HADOOP-13316
> URL: https://issues.apache.org/jira/browse/HADOOP-13316
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-13316.01.patch
>
>
> For security reasons, delegation tokens are supposed to be exchanged only 
> under a strong authentication.
> For example, HDFS [only distributes or renews a delegation token under kerberos 
> authentication|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L5164]
> {{DelegationTokenAuthenticationHandler}}, used by KMS and HttpFS, doesn't 
> follow this today, which poses a security concern. Details in the comments.
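The rule the issue describes can be modeled in a few lines. This is an illustrative stand-in, not the actual Hadoop classes: delegation-token operations should be allowed only for callers authenticated by a strong mechanism such as Kerberos, never by a delegation token itself.

```java
// Conceptual model of the check this issue asks for; class and enum names
// are illustrative, not Hadoop's real types.
public class TokenAuthRule {
    public enum AuthMethod { KERBEROS, TOKEN, SIMPLE }

    // Mirrors the spirit of the linked FSNamesystem check: only a
    // Kerberos-authenticated caller may get or renew a delegation token.
    public static boolean mayOperateOnDelegationToken(AuthMethod method) {
        return method == AuthMethod.KERBEROS;
    }

    public static void main(String[] args) {
        System.out.println(mayOperateOnDelegationToken(AuthMethod.KERBEROS)); // true
        System.out.println(mayOperateOnDelegationToken(AuthMethod.TOKEN));    // false
    }
}
```

The reported bug is that the KMS/HttpFS handler effectively answers `true` even when the caller authenticated with a delegation token.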






[jira] [Assigned] (HADOOP-13321) Deprecate FileSystem APIs that promote inefficient call patterns.

2016-06-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13321:
--

Assignee: Mingliang Liu

> Deprecate FileSystem APIs that promote inefficient call patterns.
> -
>
> Key: HADOOP-13321
> URL: https://issues.apache.org/jira/browse/HADOOP-13321
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> {{FileSystem}} contains several methods that act as convenience wrappers over 
> calling {{getFileStatus}} and retrieving a single property of the returned 
> {{FileStatus}}.  These methods have a habit of fostering inefficient call 
> patterns in applications, resulting in multiple redundant {{getFileStatus}} 
> calls.  For HDFS, this translates into wasteful NameNode RPC traffic.  For 
> file systems backed by cloud object stores, this translates into wasteful 
> HTTP traffic.  This issue proposes to deprecate these methods and instead 
> encourage applications to call {{getFileStatus}} and then reuse the same 
> {{FileStatus}} instance as needed.






[jira] [Commented] (HADOOP-13321) Deprecate FileSystem APIs that promote inefficient call patterns.

2016-06-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348976#comment-15348976
 ] 

Mingliang Liu commented on HADOOP-13321:


+1 (non-binding) for the proposal and the set of wrapper methods. After 
watching https://www.youtube.com/watch?v=R-BjP1iQ5lU I believe this is a 
necessary long-term fix.

> Deprecate FileSystem APIs that promote inefficient call patterns.
> -
>
> Key: HADOOP-13321
> URL: https://issues.apache.org/jira/browse/HADOOP-13321
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> {{FileSystem}} contains several methods that act as convenience wrappers over 
> calling {{getFileStatus}} and retrieving a single property of the returned 
> {{FileStatus}}.  These methods have a habit of fostering inefficient call 
> patterns in applications, resulting in multiple redundant {{getFileStatus}} 
> calls.  For HDFS, this translates into wasteful NameNode RPC traffic.  For 
> file systems backed by cloud object stores, this translates into wasteful 
> HTTP traffic.  This issue proposes to deprecate these methods and instead 
> encourage applications to call {{getFileStatus}} and then reuse the same 
> {{FileStatus}} instance as needed.






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: HADOOP-12893.addendum-trunk.01.patch

Here's the addendum to address previous comments from Sean and Arpit.

I've gone through the spreadsheet and updated what I found inconsistent, then 
generated this patch with some manual line changes.

- jcip is actually CCAL, not ASL. Added L
- some deps have NOTICE files that we previously thought didn't need to be 
included. Included them now.

I will be on PTO until 7/5, so I won't be able to follow up on this. If there 
are any findings, please feel free to update. Thanks.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.addendum-trunk.01.patch, HADOOP-12893.branch-2.01.patch, 
> HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, 
> HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Assigned] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-12893:
--

Assignee: Xiao Chen

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-13305) Define common statistics names across schemes

2016-06-24 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348954#comment-15348954
 ] 

Jitendra Nath Pandey commented on HADOOP-13305:
---

+1 I will commit it by Monday if there are no objections.

> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch, HADOOP-13305.001.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so {{getLong(name)}} is up to each storage 
> statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
> {{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there is no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue the per-operation 
> stats of the same name. Meanwhile, every FS will have its own internal things 
> to count, which can't be centrally defined or managed. But there are some 
> common ones that would be more easily managed if they all had the same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3  (e.g. [HADOOP-13171])
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> offline discussion.
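The payoff of shared names can be sketched without the real {{StorageStatistics}} class. In this toy model (the constant name is an assumption, not a committed Hadoop identifier), two schemes publish the same counter under the same key, so a downstream caller needs only one lookup string:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of common statistics names: if every scheme publishes its
// per-operation counter under the same standard key, callers need only
// one lookup string instead of one per file system.
public class CommonStatNames {
    // A shared constant, instead of "getStatus" vs "get_status" per scheme.
    public static final String OP_GET_FILE_STATUS = "op_get_file_status";

    public static long getLong(Map<String, Long> stats, String name) {
        return stats.getOrDefault(name, 0L);
    }

    public static void main(String[] args) {
        Map<String, Long> hdfsStats = new HashMap<>();
        Map<String, Long> s3aStats = new HashMap<>();
        hdfsStats.put(OP_GET_FILE_STATUS, 42L);
        s3aStats.put(OP_GET_FILE_STATUS, 7L);
        // The same key works across schemes.
        System.out.println(getLong(hdfsStats, OP_GET_FILE_STATUS)); // 42
        System.out.println(getLong(s3aStats, OP_GET_FILE_STATUS));  // 7
    }
}
```

Scheme-private counters can still exist alongside the common set; only the shared operations need agreed names.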






[jira] [Commented] (HADOOP-13321) Deprecate FileSystem APIs that promote inefficient call patterns.

2016-06-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348923#comment-15348923
 ] 

Chris Nauroth commented on HADOOP-13321:


HIVE-10223 is an example of an inefficient call pattern: multiple calls to the 
convenience wrapper methods that I was able to trivially consolidate down to a 
single {{getFileStatus}}.

I believe the full set of these wrapper methods to deprecate is:

{{FileSystem#exists}}
{{FileSystem#getBlockSize}}
{{FileSystem#getLength}}
{{FileSystem#getReplication}}
{{FileSystem#isDirectory}}
{{FileSystem#isFile}}

This issue does not apply to {{FileContext}}, where methods like {{exists}} can 
be placed behind {{FileContext.Util}}, which comes with JavaDocs describing 
that these are not primitive operations.
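The cost of the wrapper pattern can be shown with a toy model rather than the real Hadoop API. Here each wrapper performs its own getFileStatus-equivalent lookup, so querying two properties costs two "RPCs" where reusing one status object costs one:

```java
// Toy model of the call pattern, not the real Hadoop API: each wrapper
// triggers its own getFileStatus-equivalent lookup.
public class CallPatternDemo {
    static int rpcCount = 0;

    // Stand-in for getFileStatus; returns {length, blockSize}.
    static long[] getFileStatus() {
        rpcCount++;
        return new long[] {1024L, 128L};
    }

    static long getLength()    { return getFileStatus()[0]; }  // wrapper
    static long getBlockSize() { return getFileStatus()[1]; }  // wrapper

    public static void main(String[] args) {
        // Inefficient: two wrapper calls, two lookups.
        getLength();
        getBlockSize();
        int inefficient = rpcCount;

        // Efficient: one lookup, reuse the result.
        rpcCount = 0;
        long[] status = getFileStatus();
        long length = status[0], blockSize = status[1];
        int efficient = rpcCount;

        System.out.println(inefficient + " vs " + efficient); // prints "2 vs 1"
    }
}
```

Against HDFS each extra lookup is a NameNode RPC; against an object store it is an HTTP round trip, which is why the deprecation steers callers toward a single {{getFileStatus}} plus reuse.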

> Deprecate FileSystem APIs that promote inefficient call patterns.
> -
>
> Key: HADOOP-13321
> URL: https://issues.apache.org/jira/browse/HADOOP-13321
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> {{FileSystem}} contains several methods that act as convenience wrappers over 
> calling {{getFileStatus}} and retrieving a single property of the returned 
> {{FileStatus}}.  These methods have a habit of fostering inefficient call 
> patterns in applications, resulting in multiple redundant {{getFileStatus}} 
> calls.  For HDFS, this translates into wasteful NameNode RPC traffic.  For 
> file systems backed by cloud object stores, this translates into wasteful 
> HTTP traffic.  This issue proposes to deprecate these methods and instead 
> encourage applications to call {{getFileStatus}} and then reuse the same 
> {{FileStatus}} instance as needed.






[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348922#comment-15348922
 ] 

Hadoop QA commented on HADOOP-13251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813145/HADOOP-13251.10.patch 
|
| JIRA Issue | HADOOP-13251 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cb4294aee958 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bf74dbf |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9877/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9877/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9877/console 

[jira] [Created] (HADOOP-13321) Deprecate FileSystem APIs that promote inefficient call patterns.

2016-06-24 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13321:
--

 Summary: Deprecate FileSystem APIs that promote inefficient call 
patterns.
 Key: HADOOP-13321
 URL: https://issues.apache.org/jira/browse/HADOOP-13321
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth


{{FileSystem}} contains several methods that act as convenience wrappers over 
calling {{getFileStatus}} and retrieving a single property of the returned 
{{FileStatus}}.  These methods have a habit of fostering inefficient call 
patterns in applications, resulting in multiple redundant {{getFileStatus}} 
calls.  For HDFS, this translates into wasteful NameNode RPC traffic.  For file 
systems backed by cloud object stores, this translates into wasteful HTTP 
traffic.  This issue proposes to deprecate these methods and instead encourage 
applications to call {{getFileStatus}} and then reuse the same {{FileStatus}} 
instance as needed.






[jira] [Updated] (HADOOP-8966) Change dfsadmin and haadmin commands to be case insensitive

2016-06-24 Thread Gergely Novák (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated HADOOP-8966:
--
Status: Patch Available  (was: Open)

> Change dfsadmin and haadmin commands to be case insensitive 
> 
>
> Key: HADOOP-8966
> URL: https://issues.apache.org/jira/browse/HADOOP-8966
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Brandon Li
>Assignee: Gergely Novák
>Priority: Trivial
> Attachments: HADOOP-8966.001.patch
>
>
> It would be easier to use these commands if they were case insensitive.
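The change amounts to matching command names case-insensitively. A minimal self-contained sketch (the command names below are illustrative examples, not the full dfsadmin/haadmin set):

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of case-insensitive command dispatch; the command list is
// illustrative, not the real dfsadmin/haadmin command set.
public class CaseInsensitiveCommands {
    static final List<String> COMMANDS = Arrays.asList(
        "-transitionToActive", "-transitionToStandby", "-getServiceState");

    // Returns the canonical command name, or null if unrecognized.
    public static String resolve(String input) {
        for (String cmd : COMMANDS) {
            if (cmd.equalsIgnoreCase(input)) {
                return cmd;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(resolve("-transitiontoactive")); // -transitionToActive
        System.out.println(resolve("-bogus"));              // null
    }
}
```

Normalizing to a canonical name at the parsing boundary keeps the rest of the admin tool unchanged.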






[jira] [Updated] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13251:
---
Attachment: HADOOP-13251.10.patch

Patch 10 to fix the failed tests.

> DelegationTokenAuthenticationHandler should detect actual renewer when renew 
> token
> --
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.05.patch, 
> HADOOP-13251.06.patch, HADOOP-13251.07.patch, HADOOP-13251.08.patch, 
> HADOOP-13251.08.patch, HADOOP-13251.09.patch, HADOOP-13251.10.patch, 
> HADOOP-13251.innocent.patch
>
>
> Turns out KMS delegation token renewal feature (HADOOP-13155) does not work 
> well with client side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> itself is used at the KMS server side to decide the renewer, which is always 
> the token's owner. This ends up rejecting the renew request due to a renewer 
> mismatch.
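The mismatch described above can be modeled abstractly. This is a conceptual stand-in for the handler logic, not the actual KMS code: the buggy path derives the renewer from the token's owner, while the intended behavior compares the token's recorded renewer against the authenticated caller.

```java
// Conceptual model of the bug; method and parameter names are illustrative.
public class RenewerCheck {
    // Buggy behavior described in the issue: the server takes the effective
    // renewer from the token itself (its owner), not from the caller.
    public static boolean renewAllowedBuggy(String tokenOwner,
                                            String tokenRenewer,
                                            String caller) {
        String effectiveRenewer = tokenOwner;   // wrong source
        return effectiveRenewer.equals(tokenRenewer);
    }

    // Intended behavior: compare the authenticated caller against the
    // renewer recorded in the token.
    public static boolean renewAllowedFixed(String tokenOwner,
                                            String tokenRenewer,
                                            String caller) {
        return caller.equals(tokenRenewer);
    }

    public static void main(String[] args) {
        // Token owned by "user" with renewer "yarn"; the RM ("yarn") renews.
        System.out.println(renewAllowedBuggy("user", "yarn", "yarn")); // false
        System.out.println(renewAllowedFixed("user", "yarn", "yarn")); // true
    }
}
```

In the buggy path the legitimate renewal by Yarn's resource manager is rejected because "user" does not equal "yarn".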






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348816#comment-15348816
 ] 

Kai Zheng commented on HADOOP-12756:


Thanks, Chris, for the clarification and further thoughts. It sounds good to me.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because there is no native support for OSS in Hadoop.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348804#comment-15348804
 ] 

Chris Nauroth commented on HADOOP-12756:


bq. According to our previous discussion, a committer preparing to commit the 
code (now the branch to merge) could run the tests the way we do for existing 
cloud modules. For the long term, we need to think about a better solution for 
such live-service integrations; it looks like Steve Loughran already has some 
ideas and even work on this.

[~drankye], to clarify, I am asking that there be a way for committers who have 
Aliyun cloud credentials to run the contract tests live against the real Aliyun 
Object Storage Service.  I am not demanding that we go beyond that (like a true 
pre-commit solution) within the scope of the Aliyun integration effort.

Another way to think of this is that I am asking for the Aliyun integration 
effort to provide testing support equivalent to that of our other supported 
alternative file systems, like WASB and S3A. Currently, that means using the 
standard contract tests, with a capability to configure credentials and run 
them from a developer environment against the corresponding back-end service.

Thank you!
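For concreteness, the convention Chris refers to typically looks like this: a sketch of a {{src/test/resources/auth-keys.xml}} kept out of source control, which supplies live credentials so the contract tests run against the real service. The OSS property names below are assumptions for illustration; only the pattern is the point.

{code}
<configuration>
  <!-- Hypothetical OSS test bindings, following the hadoop-aws auth-keys.xml
       pattern; the real property names would be defined by the OSS module. -->
  <property>
    <name>test.fs.oss.name</name>
    <value>oss://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.oss.access.key.id</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.oss.access.key.secret</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
{code}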

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because there is no native support for OSS in Hadoop.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-06-24 Thread niccolo becchi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348798#comment-15348798
 ] 

niccolo becchi commented on HADOOP-13320:
-

Thanks [~templedf], you are right. I have included it in the ticket's 
description.

> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct usage 
> message. So, running the code with no parameters, as in the following 
> example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}
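A fix along the lines the description implies would validate the argument count before indexing into the list. This is a sketch, assuming the tutorial's two accepted forms (input and output paths, with an optional -skip file); the class name is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a fixed argument check for the tutorial's WordCount v2.0 main():
// validate remainingArgs *before* indexing into it, so an empty argument list
// yields the usage message instead of an IndexOutOfBoundsException.
public class WordCountArgsCheck {
  static boolean validArgs(String[] remainingArgs) {
    // WordCount v2.0 accepts either <in> <out> or <in> <out> -skip <file>.
    return remainingArgs.length == 2 || remainingArgs.length == 4;
  }

  public static void main(String[] args) {
    if (!validArgs(args)) {
      System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
      System.exit(2);
    }
    List<String> otherArgs = new ArrayList<>();  // safe to index only now
    for (String a : args) {
      otherArgs.add(a);
    }
    System.out.println("input=" + otherArgs.get(0)
        + " output=" + otherArgs.get(1));
  }
}
```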



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Description: 
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
On the Example: WordCount v2.0 

The current arguments check is wrong: it never prints the correct usage 
message. So, running the code with no parameters, as in the following 
example:
{code}
yarn jar /var/tmp/WordCount.jar task0.WordCount2
{code}
 
I have got the following exception message in output:
{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}

  was:
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The current arguments check is wrong: it never prints the correct usage 
message. So, running the code with no parameters, as in the following 
example:
{code}
yarn jar /var/tmp/WordCount.jar task0.WordCount2
{code}
 
I have got the following exception message in output:
{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct usage 
> message. So, running the code with no parameters, as in the following 
> example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13075:
---
Target Version/s: 2.9.0

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of these, the S3AFileSystem in hadoop-aws supports opting into only SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support already available in the aws-java-sdk, it should be 
> fairly straightforward [6],[7] to support the other two types of SSE with 
> some additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
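As a rough sketch of the configuration surface this implies: SSE-S3 is already selectable today via the existing property, and the proposal would add properties along the lines below. The second property name is an illustrative guess, not a settled name.

{code}
<!-- Existing (HADOOP-10568): opt into SSE-S3 -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>AES256</value>
</property>

<!-- Hypothetical addition sketched by this issue (name is illustrative):
     a KMS key id for SSE-KMS, or a customer-provided key for SSE-C -->
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>your-kms-key-id-or-customer-key</value>
</property>
{code}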



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13075:
---
Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-13204

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of these, the S3AFileSystem in hadoop-aws supports opting into only SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support already available in the aws-java-sdk, it should be 
> fairly straightforward [6],[7] to support the other two types of SSE with 
> some additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11823) Checking for Verifier in RPC Denied Reply

2016-06-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348767#comment-15348767
 ] 

ASF GitHub Bot commented on HADOOP-11823:
-

GitHub user pradeep1288 opened a pull request:

https://github.com/apache/hadoop/pull/106

HADOOP-11823: dont check for verifier in RpcDeniedReply

When RPC returns a denied reply, the code should not check for a verifier. 
This is a bug, as it doesn't match the RPC protocol (see page 33 of the NFS 
Illustrated book).
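In RFC 5531 terms, the verifier field exists only in the accepted arm of the reply union; a denied reply goes straight to the rejection status. A simplified parser sketch of that layout (not the actual hadoop-nfs classes; the XDR stream is modeled as an array of 32-bit words for illustration):

```java
// Sketch (simplified, not the actual hadoop-nfs classes) of RFC 5531 reply
// parsing. The verifier field exists only in an accepted reply; reading one
// from a denied reply would desynchronize the stream -- the bug this PR fixes.
class RpcReplyParser {
  static final int MSG_ACCEPTED = 0;
  static final int MSG_DENIED = 1;

  static String parse(int[] xdr) {
    int i = 0;
    int xid = xdr[i++];
    int msgType = xdr[i++];          // 1 = REPLY; not checked in this sketch
    int replyState = xdr[i++];
    if (replyState == MSG_ACCEPTED) {
      int verifierFlavor = xdr[i++]; // verifier appears in accepted replies only
      int verifierWords = xdr[i++];  // opaque verifier body length, in words here
      i += verifierWords;            // skip the verifier body
      int acceptState = xdr[i++];
      return "xid=" + xid + " accepted:" + acceptState;
    } else {
      int rejectState = xdr[i++];    // no verifier in a denied reply
      return "xid=" + xid + " denied:" + rejectState;
    }
  }
}
```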


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pradeep1288/hadoop hadoop-11823

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #106


commit b595d74c6bba5cff01c01332fdc4bd39ae536312
Author: Pradeep Nayak 
Date:   2016-06-24T22:05:34Z

HADOOP-11823: dont check for verifier in RpcDeniedReply




> Checking for Verifier in RPC Denied Reply
> -
>
> Key: HADOOP-11823
> URL: https://issues.apache.org/jira/browse/HADOOP-11823
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Gokul Soundararajan
>Assignee: Brandon Li
>Priority: Blocker
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi,
> There is a bug in the way hadoop-nfs parses the reply for an RPC denied 
> message. Specifically, this happens in RpcDeniedReply.java, line #50.
> When RPC returns a denied reply, the code should not check for a verifier. 
> This is a bug, as it doesn't match the RPC protocol (see page 33 of the NFS 
> Illustrated book). 
> I would be happy to submit the patch, but I need some help committing it 
> into mainline; I haven't contributed to Hadoop yet.
> Thanks,
> Gokul



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348764#comment-15348764
 ] 

Hadoop QA commented on HADOOP-13251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813123/HADOOP-13251.09.patch 
|
| JIRA Issue | HADOOP-13251 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42012167d5c9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 97578649 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9875/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9875/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9875/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348761#comment-15348761
 ] 

Kai Zheng commented on HADOOP-12756:


Thanks [~ste...@apache.org] and [~cnauroth] for sharing your thoughts! You 
both asked for a feature branch for this and, given the latest discussion in 
the community, I think we should have one. I'm not sure the developers here 
are familiar with the approach, so let me explain it to them offline first. I 
personally feel that, with a feature branch, it could be even faster to get 
everything ready and delivered.

One question you could help clarify, Chris:
bq. a way for committers to run the contract tests integrated with the live 
service.
According to our previous discussion, a committer preparing to commit the 
code (now the branch to merge) could run the tests the way we do for existing 
cloud modules. For the long term, we need to think about a better solution for 
such live-service integrations; it looks like [~ste...@apache.org] already has 
some ideas and even work on this.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because there is no native support for OSS in Hadoop.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-8966) Change dfsadmin and haadmin commands to be case insensitive

2016-06-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated HADOOP-8966:
--
Summary: Change dfsadmin and haadmin commands to be case insensitive   
(was: Change dfsadmin and haadmin commands to be case insenitive )

> Change dfsadmin and haadmin commands to be case insensitive 
> 
>
> Key: HADOOP-8966
> URL: https://issues.apache.org/jira/browse/HADOOP-8966
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Brandon Li
>Assignee: Gergely Novák
>Priority: Trivial
> Attachments: HADOOP-8966.001.patch
>
>
> It can be easier to use when these commands are case insensitive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-8966) Change dfsadmin and haadmin commands to be case insenitive

2016-06-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated HADOOP-8966:
--
Attachment: HADOOP-8966.001.patch

> Change dfsadmin and haadmin commands to be case insenitive 
> ---
>
> Key: HADOOP-8966
> URL: https://issues.apache.org/jira/browse/HADOOP-8966
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Brandon Li
>Assignee: Gergely Novák
>Priority: Trivial
> Attachments: HADOOP-8966.001.patch
>
>
> It can be easier to use when these commands are case insensitive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-8966) Change dfsadmin and haadmin commands to be case insenitive

2016-06-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák reassigned HADOOP-8966:
-

Assignee: Gergely Novák

> Change dfsadmin and haadmin commands to be case insenitive 
> ---
>
> Key: HADOOP-8966
> URL: https://issues.apache.org/jira/browse/HADOOP-8966
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Brandon Li
>Assignee: Gergely Novák
>Priority: Trivial
>
> It can be easier to use when these commands are case insensitive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-06-24 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348702#comment-15348702
 ] 

Daniel Templeton commented on HADOOP-13320:
---

I believe you mean here:

https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0

[~d...@cloudera.com], wanna take a whack at it?

> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct usage 
> message. So, running the code with no parameters, as in the following 
> example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348695#comment-15348695
 ] 

Wei-Chiu Chuang commented on HADOOP-13263:
--

Thanks for the quick response.

Re: metrics
Thanks for the clarification. That makes sense to me, and I'm happy to see a 
follow-up JIRA to add metrics and provide more visibility into group 
resolution.

Re: CommonConfigurationKeys
I don't have preference. It looks like people add new property keys into both 
classes regardless of what the Javadoc says.

The core-default.xml and GroupsMapping.md changes look good to me too.

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch, 
> HADOOP-13263.006.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds, causing the Failover Controller to kill the NameNode.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-24 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348631#comment-15348631
 ] 

Stephen O'Donnell commented on HADOOP-13263:


[~jojochuang] Thanks for the review.

{quote}
What's the purpose of getBackgroundRefreshSuccess(), 
getBackgroundRefreshException, getBackgroundRefreshQueued, 
getBackgroundRefreshRunning in Group class?
{quote}

[~arpitagarwal] suggested that we put some counters in that can be exposed as 
Namenode metrics in a further Jira. I think it makes sense, otherwise it will 
be impossible to know in a running system if the refresh queue is getting very 
large, or if refreshes are hitting an exception frequently.

{quote}
I also wonder if the new properties should be defined in 
CommonConfigurationKeys instead
{quote}
I am happy to move these if you want. I found the existing group cache 
parameters in `CommonConfigurationKeysPublic` so I kept them together. Let me 
know if you want me to move them and I can submit another patch version.

Do you want me to update the GroupsMapping.md and core-default.xml within this 
Jira so it all gets committed together, or should we do docs separately? I've 
got the following ready to go:

{code}

<property>
  <name>hadoop.security.groups.cache.background.reload</name>
  <value>false</value>
  <description>
    Whether to reload expired user->group mappings using a background thread
    pool. If set to true, a pool of
    hadoop.security.groups.cache.background.reload.threads is created to
    update the cache in the background.
  </description>
</property>


<property>
  <name>hadoop.security.groups.cache.background.reload.threads</name>
  <value>3</value>
  <description>
    Only relevant if hadoop.security.groups.cache.background.reload is true.
    Controls the number of concurrent background user->group cache entry
    refreshes. Pending refresh requests beyond this value are queued and
    processed when a thread is free.
  </description>
</property>
{code}

And for the groupMapping.md:

{quote}
With the default caching implementation, after 
`hadoop.security.groups.cache.secs` when the cache entry expires, the next 
thread to request group membership will query the group mapping service 
provider to lookup the current groups for the user. While this lookup is 
running, the thread that initiated it will block, while any other threads 
requesting groups for the same user will retrieve the previously cached values. 
If the refresh fails, the thread performing the refresh will throw an exception 
and the process will repeat for the next thread that requests a lookup for that 
value. If the lookup repeatedly fails, and the cache is not updated, after 
`hadoop.security.groups.cache.secs * 10` seconds the cached entry will be 
evicted and all threads will block until a successful reload is performed.

To avoid any threads blocking when the cached entry expires, set 
`hadoop.security.groups.cache.background.reload` to true. This enables a small 
thread pool of `hadoop.security.groups.cache.background.reload.threads` threads 
(3 by default). With this setting, when the cache is queried for 
an expired entry, the expired result is returned immediately and a task is 
queued to refresh the cache in the background. If the background refresh fails, 
a new refresh operation will be queued by the next request to the cache, until 
`hadoop.security.groups.cache.secs * 10` seconds have passed, when the cached entry will be evicted 
and all threads will block for that user until a successful reload occurs.
{quote}

If you give this a quick review and let me know whether it should be in this 
patch, I can get a new version pushed up pretty quickly.

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch, 
> HADOOP-13263.006.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.

[jira] [Updated] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13251:
---
Attachment: HADOOP-13251.09.patch

Thanks for the further discussions, Andrew. I'm convinced; your proposal makes 
more sense. The fact that we need far less code to achieve it proves that.
Patch 9 sets it the better way.

> DelegationTokenAuthenticationHandler should detect actual renewer when renew 
> token
> --
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.05.patch, 
> HADOOP-13251.06.patch, HADOOP-13251.07.patch, HADOOP-13251.08.patch, 
> HADOOP-13251.08.patch, HADOOP-13251.09.patch, HADOOP-13251.innocent.patch
>
>
> Turns out the KMS delegation token renewal feature (HADOOP-13155) does not work 
> well with client-side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> is used at the KMS server side to decide the renewer, which is always 
> the token's owner. This ends up rejecting the renew request due to renewer 
> mismatch.






[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348581#comment-15348581
 ] 

Wei-Chiu Chuang commented on HADOOP-13263:
--

Thanks [~sodonnell] this is a good idea, and thanks [~arpiagariu] for initial 
reviews.
I have a few quick comments:

 [~sodonnell] 
What's the purpose of {{getBackgroundRefreshSuccess()}}, 
{{getBackgroundRefreshException}}, {{getBackgroundRefreshQueued}}, 
{{getBackgroundRefreshRunning}} in Group class?
If they are used by tests only, they should not be {{public}} (most likely 
package-private), and they should be annotated with {{@VisibleForTesting}}.

[~arpiagariu]
bq. We should add the settings to hdfs-default.xml at a minimum. I don't think 
we have any site documentation for setting up group mapping.
The new properties should go into core-default.xml. And there's a 
GroupsMapping.md under hadoop-common-project/hadoop-common/src/site/markdown. 
It would be really nice if we could get this groups mapping resolution feature 
described in this doc.

I also wonder if the new properties should be defined in 
{{CommonConfigurationKeys}} instead, because {{CommonConfigurationKeysPublic}} 
has a javadoc that says:
{code}
/** 
 * This class contains constants for configuration keys used
 * in the common code.
 *
 * It includes all publicly documented configuration keys. In general
 * this class should not be used directly (use CommonConfigurationKeys
 * instead)
 *
 */
{code}

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch, 
> HADOOP-13263.006.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.






[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348568#comment-15348568
 ] 

Andrew Wang commented on HADOOP-13251:
--

Do you feel that conditionally unsetting the DT is hacky? The fact that we don't 
have easy access to the op in authenticate makes me think it should be in the 
implementation-specific doDelegationTokenOperation.

Personally, I find manual query string parsing to be hacky. URL query strings 
can have a key with no value as well as duplicate keys, which is why I wanted 
to use a library.
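To illustrate the corner cases: even a tiny hand-rolled parser that correctly keeps duplicate keys and keys with no value takes some care, which is the argument for using a library. This is a standalone sketch, not code from any HADOOP-13251 patch:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Query strings may contain duplicate keys ("renewer=a&renewer=b") and keys
// with no value ("doAs"); naive String.split("=") parsing drops or mangles
// both. This parser keeps every occurrence, in order.
public final class QueryStringParser {
    public static Map<String, List<String>> parse(String query) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        if (query == null || query.isEmpty()) {
            return params;
        }
        for (String pair : query.split("&")) {
            if (pair.isEmpty()) {
                continue; // tolerate "a=1&&b=2"
            }
            int eq = pair.indexOf('=');
            // A key with no '=' maps to an empty value rather than being lost.
            String key = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1);
            try {
                key = URLDecoder.decode(key, "UTF-8");
                value = URLDecoder.decode(value, "UTF-8");
            } catch (UnsupportedEncodingException e) {
                throw new AssertionError("UTF-8 is always supported", e);
            }
            params.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
        return params;
    }

    public static void main(String[] args) {
        // Duplicate keys preserved; valueless key kept with an empty value.
        System.out.println(parse("op=RENEWDELEGATIONTOKEN&renewer=yarn&renewer=hdfs&doAs"));
    }
}
```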

> DelegationTokenAuthenticationHandler should detect actual renewer when renew 
> token
> --
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.05.patch, 
> HADOOP-13251.06.patch, HADOOP-13251.07.patch, HADOOP-13251.08.patch, 
> HADOOP-13251.08.patch, HADOOP-13251.innocent.patch
>
>
> Turns out the KMS delegation token renewal feature (HADOOP-13155) does not work 
> well with client-side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> is used at the KMS server side to decide the renewer, which is always 
> the token's owner. This ends up rejecting the renew request due to renewer 
> mismatch.






[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-06-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348567#comment-15348567
 ] 

Colin Patrick McCabe commented on HADOOP-12975:
---

Thanks for the heads up, [~vinayrpet].  I fixed the accidental revert of 
HADOOP-13072.

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.
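The jitter idea above is simple to state in code: each refresh sleeps the base interval plus or minus a random offset, so the per-disk DU processes drift apart instead of firing together. A hedged sketch with invented names (the committed patch wires this into CachingGetSpaceUsed's refresh thread):

```java
import java.util.concurrent.ThreadLocalRandom;

// Spreads periodic refreshes out in time so that many disks do not all run
// an expensive DU scan at the same instant, avoiding a coordinated IO spike.
public final class RefreshJitter {
    /** Returns a sleep time drawn uniformly from [interval - jitter, interval + jitter]. */
    public static long jitteredInterval(long intervalMs, long jitterMs) {
        if (jitterMs <= 0) {
            return intervalMs;
        }
        // nextLong's upper bound is exclusive, so add 1 to make it inclusive.
        return intervalMs + ThreadLocalRandom.current().nextLong(-jitterMs, jitterMs + 1);
    }

    public static void main(String[] args) {
        // e.g. a 10-minute base interval with up to 1 minute of jitter.
        long sleep = jitteredInterval(600_000L, 60_000L);
        System.out.println("next refresh in " + sleep + " ms");
    }
}
```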






[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348561#comment-15348561
 ] 

Sangjin Lee commented on HADOOP-13184:
--

Belated +1 for option 4.

> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Abhishek
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Updated] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13251:
---
Attachment: HADOOP-13251.08.patch

> DelegationTokenAuthenticationHandler should detect actual renewer when renew 
> token
> --
>
> Key: HADOOP-13251
> URL: https://issues.apache.org/jira/browse/HADOOP-13251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, 
> HADOOP-13251.03.patch, HADOOP-13251.04.patch, HADOOP-13251.05.patch, 
> HADOOP-13251.06.patch, HADOOP-13251.07.patch, HADOOP-13251.08.patch, 
> HADOOP-13251.08.patch, HADOOP-13251.innocent.patch
>
>
> Turns out the KMS delegation token renewal feature (HADOOP-13155) does not work 
> well with client-side impersonation.
> In an MR example, an end user (UGI:user) gets all kinds of DTs (with 
> renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then 
> renews these DTs as long as the MR jobs are running. But currently, the token 
> is used at the KMS server side to decide the renewer, which is always 
> the token's owner. This ends up rejecting the renew request due to renewer 
> mismatch.






[jira] [Commented] (HADOOP-13295) Possible Vulnerability in DataNodes via SSH

2016-06-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348538#comment-15348538
 ] 

Ravi Prakash commented on HADOOP-13295:
---

Mobin! Could you please answer Steve's original question?
bq. How are you deploying it?

I'm inclined to close this JIRA as invalid. We haven't seen this issue anywhere 
else, and it is probably an error in deployment.

> Possible Vulnerability in DataNodes via SSH
> ---
>
> Key: HADOOP-13295
> URL: https://issues.apache.org/jira/browse/HADOOP-13295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mobin Ranjbar
>
> I suspected something weird in my Hadoop cluster. When I run datanodes, after 
> a while my servers (except the namenode) go down due to SSH max attempts. When I 
> checked 'systemctl status ssh', I figured out there were some invalid 
> username/password attempts via SSH, and the SSH daemon blocked all incoming 
> connections; I got connection refused.
> I have no problem when my datanodes are not running.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348488#comment-15348488
 ] 

Chris Nauroth commented on HADOOP-12756:


+1 for the idea of a feature branch.  That will give you an easier time 
iterating on the code and improving it.  I'd be happy to help set that up.  Let 
me know if you'd like to proceed.

Just reinforcing some of my earlier comments, we consider it very important to 
have documentation in place and a way for committers to run the contract tests 
integrated with the live service.  Without those in place, long-term 
maintenance of this codebase is unlikely to succeed.  I would ask for those 
items to be completed before voting +1 on a branch merge to trunk.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s app and data storage, like what 
> has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-9479) Ability to plugin custom authentication mechanisms based on Jaas and Sasl

2016-06-24 Thread Mohamed Haggag (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348339#comment-15348339
 ] 

Mohamed Haggag commented on HADOOP-9479:


Are there any updates on this issue?
 

> Ability to plugin custom authentication mechanisms based on Jaas and Sasl
> -
>
> Key: HADOOP-9479
> URL: https://issues.apache.org/jira/browse/HADOOP-9479
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9479.patch, customauthentication.pdf
>
>
> Currently, it is not possible to hook up a new or modified authentication 
> mechanism to Hadoop.
> The task is to create an extension point in Hadoop to plug in a new 
> authentication mechanism. The new authentication mechanism should have both 
> Jaas and Sasl implementations.






[jira] [Updated] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Summary: Fix arguments check in documentation for WordCount v2.0  (was: Fix 
arguments check in the WordCount v2.0 in the MapReduce Documentation )

> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}
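A hypothetical corrected check, validating the argument count before indexing into the array, would restore the friendly usage message. This is a sketch against the tutorial's variable names, not the committed documentation fix:

```java
import java.util.ArrayList;
import java.util.List;

// Validates WordCount2-style arguments up front: wrong arity prints usage
// instead of blowing up with IndexOutOfBoundsException when args[0] is read.
public final class ArgsCheck {
    /** Returns the positional args (with the -skip file appended), or null on bad usage. */
    public static List<String> checkArgs(String[] remainingArgs) {
        if (remainingArgs.length != 2 && remainingArgs.length != 4) {
            System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
            return null; // the tutorial code would call System.exit(2) here
        }
        List<String> otherArgs = new ArrayList<>();
        for (int i = 0; i < remainingArgs.length; ++i) {
            if ("-skip".equals(remainingArgs[i])) {
                otherArgs.add(remainingArgs[++i]); // the skip-pattern file path
            } else {
                otherArgs.add(remainingArgs[i]);
            }
        }
        return otherArgs;
    }

    public static void main(String[] args) {
        System.out.println(checkArgs(new String[] {"indir", "outdir", "-skip", "patterns.txt"}));
    }
}
```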






[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Documentation

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Description: 
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The actual arguments check is wrong, as it's never printing the message of the 
correct usage. So, running the code with no parameters, as in the following 
example:
{code}
yarn jar /var/tmp/WordCount.jar task0.WordCount2
{code}
 
I have got the following exception message in output:
{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}

  was:
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The actual arguments check is wrong, as it's never printing the message of the 
correct usage. So, running the code with no parameters, as in the following 
example:
{code}
yarn jar /var/tmp/WordCount.jar task0.WordCount2
{code}
 
I have got the following exception:
{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}


> Fix arguments check in the WordCount v2.0 in the MapReduce Documentation 
> -
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}






[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Documentation

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Description: 
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The actual arguments check is wrong, as it's never printing the message of the 
correct usage. So, running the code with no parameters, as in the following 
example:
{code}
yarn jar /var/tmp/WordCount.jar task0.WordCount2
{code}
 
I have got the following exception:
{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}

  was:
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The actual arguments check is wrong, as it's never printing the message of the 
correct usage. So, running the code with a wrong number of parameters we get 
the exception as here, where the application is run with 0 parameters:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount <in> <out> [-skip skipPatternFile]
{code}


> Fix arguments check in the WordCount v2.0 in the MapReduce Documentation 
> -
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with no parameters, as in the following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> produces the following exception:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount   [-skip skipPatternFile]
> {code}
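
The patch itself is not quoted in this thread, so purely as an illustration (the class and method names are hypothetical, and the full usage string `wordcount <in> <out> [-skip skipPatternFile]` is assumed from the tutorial), here is a minimal sketch of the kind of guard the report says is missing: validate the count of remaining arguments before indexing into the list, since indexing an empty list is exactly what produces the IndexOutOfBoundsException above.

```java
import java.util.ArrayList;
import java.util.List;

public class UsageCheck {
    // Hypothetical stand-in for the tutorial's main(): collect non-option
    // arguments, then validate the count BEFORE calling get(0)/get(1).
    static String validate(String[] args) {
        List<String> remaining = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if ("-skip".equals(args[i]) && i + 1 < args.length) {
                i++; // consume the skip-pattern file argument
            } else {
                remaining.add(args[i]);
            }
        }
        // Without this guard, remaining.get(0) on an empty list throws
        // IndexOutOfBoundsException: Index: 0, Size: 0 -- the reported bug.
        if (remaining.size() != 2) {
            return "Usage: wordcount <in> <out> [-skip skipPatternFile]";
        }
        return "in=" + remaining.get(0) + " out=" + remaining.get(1);
    }

    public static void main(String[] args) {
        System.out.println(validate(new String[] {}));
        System.out.println(validate(new String[] {"input", "output"}));
    }
}
```

With no arguments this prints the usage line instead of throwing; with two input/output arguments it proceeds.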



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Documentation

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Description: 
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The current arguments check is wrong: it never prints the correct-usage message. 
Running the code with the wrong number of parameters therefore throws an 
exception; here the application is run with 0 parameters:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

{code}
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{code}

Instead of the expected friendly message:
{code}
 Usage: wordcount   [-skip skipPatternFile]
{code}

  was:
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The current arguments check is wrong: it never prints the correct-usage message. 
Running the code with the wrong number of parameters throws the following 
exception:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Instead of the expected message

 Usage: wordcount   [-skip skipPatternFile]


> Fix arguments check in the WordCount v2.0 in the MapReduce Documentation 
> -
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with the wrong number of parameters therefore 
> throws an exception; here the application is run with 0 parameters:
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount   [-skip skipPatternFile]
> {code}






[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Documentation

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Description: 
This issue is affecting the documentation page, so the code is not covered by 
any tests. It's actually visible on the page:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
On the Example: WordCount v2.0 

The current arguments check is wrong: it never prints the correct-usage message. 
Running the code with the wrong number of parameters throws the following 
exception:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Instead of the expected message

 Usage: wordcount   [-skip skipPatternFile]

  was:
The current arguments check is wrong: it never prints the correct-usage message. 
Running the code with the wrong number of parameters throws the following 
exception:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Instead of the expected message

 Usage: wordcount   [-skip skipPatternFile]


> Fix arguments check in the WordCount v2.0 in the MapReduce Documentation 
> -
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
> On the Example: WordCount v2.0 
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with the wrong number of parameters throws the 
> following exception:
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Instead of the expected message
>  Usage: wordcount   [-skip skipPatternFile]






[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Documentation

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Summary: Fix arguments check in the WordCount v2.0 in the MapReduce 
Documentation   (was: Fix arguments check in the WordCount v2.0 in the 
MapReduce Doc)

> Fix arguments check in the WordCount v2.0 in the MapReduce Documentation 
> -
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with the wrong number of parameters throws the 
> following exception:
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Instead of the expected message
>  Usage: wordcount   [-skip skipPatternFile]






[jira] [Updated] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Doc

2016-06-24 Thread niccolo becchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niccolo becchi updated HADOOP-13320:

Status: Patch Available  (was: Open)

Created Pull Request on:
https://github.com/apache/hadoop/pull/105

> Fix arguments check in the WordCount v2.0 in the MapReduce Doc
> --
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with the wrong number of parameters throws the 
> following exception:
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Instead of the expected message
>  Usage: wordcount   [-skip skipPatternFile]






[jira] [Commented] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Doc

2016-06-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348301#comment-15348301
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

GitHub user pippobaudos opened a pull request:

https://github.com/apache/hadoop/pull/105

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Doc. Contr…

https://issues.apache.org/jira/browse/HADOOP-13320

Fixed the check on the number of parameters.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pippobaudos/hadoop feature/hadoop-13320

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #105


commit 469bc02c6f932bc55fced96de33884cebbe92242
Author: Niccolo Becchi 
Date:   2016-06-24T13:50:24Z

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Doc. Contributed 
by Niccolo Becchi




> Fix arguments check in the WordCount v2.0 in the MapReduce Doc
> --
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> The current arguments check is wrong: it never prints the correct-usage 
> message. Running the code with the wrong number of parameters throws the 
> following exception:
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Instead of the expected message
>  Usage: wordcount   [-skip skipPatternFile]






[jira] [Created] (HADOOP-13320) Fix arguments check in the WordCount v2.0 in the MapReduce Doc

2016-06-24 Thread niccolo becchi (JIRA)
niccolo becchi created HADOOP-13320:
---

 Summary: Fix arguments check in the WordCount v2.0 in the 
MapReduce Doc
 Key: HADOOP-13320
 URL: https://issues.apache.org/jira/browse/HADOOP-13320
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: niccolo becchi
Priority: Trivial


The current arguments check is wrong: it never prints the correct-usage message. 
Running the code with the wrong number of parameters throws the following 
exception:

yarn jar /var/tmp/WordCount.jar task0.WordCount2

Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at task0.WordCount2.main(WordCount2.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Instead of the expected message

 Usage: wordcount   [-skip skipPatternFile]






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Description: 
We've now got some fairly complex auth mechanisms going on: -hadoop config, 
KMS, env vars, "none". If something isn't working, it's going to be a lot 
harder to debug.

Review and tune the S3A provider point

* add logging of what's going on in s3 auth to help debug problems
* make a whole chain of logins expressible
* allow the anonymous credentials to be included in the list
* review and update the documents.


I propose *carefully* adding some debug messages to identify which auth 
provider is doing the auth, so we can see if the env vars were kicking in, 
sysprops, etc.

What we mustn't do is leak any secrets: this should be identifying whether 
properties and env vars are set, not what their values are. I don't believe 
that this will generate a security risk.

  was:
We've now got some fairly complex auth mechanisms going on: -hadoop config, 
KMS, env vars, "none". If something isn't working, it's going to be a lot 
harder to debug.

I propose *carefully* adding some debug messages to identify which auth 
provider is doing the auth, so we can see if the env vars were kicking in, 
sysprops, etc.

What we mustn't do is leak any secrets: this should be identifying whether 
properties and env vars are set, not what their values are. I don't believe 
that this will generate a security risk.


> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> We've now got some fairly complex auth mechanisms going on: -hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider point
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.
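
To make the "log presence, not values" rule concrete, here is a hedged sketch (the `AWS_ACCESS_KEY_ID` env var, `aws.accessKeyId` system property, and `fs.s3a.access.key` config key are the standard names, but this method is an illustration, not the actual S3A code) that reports only whether each credential source is configured:

```java
import java.util.Map;
import java.util.Properties;

public class AuthDiagnostics {
    // Report only WHETHER each credential source is set -- never the value --
    // so the debug output cannot leak secrets.
    static String describe(Map<String, String> env, Properties sysProps,
                           Map<String, String> hadoopConf) {
        boolean envSet = env.containsKey("AWS_ACCESS_KEY_ID");
        boolean sysPropSet = sysProps.getProperty("aws.accessKeyId") != null;
        boolean confSet = hadoopConf.containsKey("fs.s3a.access.key");
        return "env=" + envSet + " sysprop=" + sysPropSet + " conf=" + confSet;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("aws.accessKeyId", "never-printed");
        System.out.println(describe(Map.of(), props,
                Map.of("fs.s3a.access.key", "never-printed")));
    }
}
```

The output tells a debugger which source kicked in without ever echoing a key.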






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-06-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Summary: Tune S3A provider plugin mechanism  (was: add logging of what's 
going on in s3 auth to help debug problems)

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> We've now got some fairly complex auth mechanisms going on: -hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348124#comment-15348124
 ] 

Hadoop QA commented on HADOOP-12756:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} root: The patch generated 6 new + 0 unchanged - 
0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
8s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  4m 
39s{color} | {color:red} root generated 1 new + 11566 unchanged - 0 fixed = 
11567 total (was 11566) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 41s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813029/HADOOP-12756.006.patch
 |
| JIRA Issue | HADOOP-12756 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux beca28ccabf6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348106#comment-15348106
 ] 

Hadoop QA commented on HADOOP-12756:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | {color:orange} root: The patch generated 7 new + 0 unchanged - 
0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
10s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 32s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813023/HADOOP-12756.006.patch
 |
| JIRA Issue | HADOOP-12756 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 88120e562b6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6314843 |
| Default Java | 1.8.0_91 |
| checkstyle | 

[jira] [Commented] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-06-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348081#comment-15348081
 ] 

Steve Loughran commented on HADOOP-13309:
-

+1 for a documented convention: UGI short name for user & group in the absence 
of anything else.

For blobstores, maybe one feature that could be queried is which permission 
models are available, e.g. the Unix-style user+group+other model.

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.






[jira] [Commented] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348060#comment-15348060
 ] 

Hadoop QA commented on HADOOP-13319:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
3s{color} | {color:red} Docker failed to build yetus/hadoop:fb13ab0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813031/HADOOP-13252-branch-2-001.patch
 |
| JIRA Issue | HADOOP-13319 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9873/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers from which to pick up AWS 
> credentials.
> The environment variable provider added in HADOOP-12807 should go before 
> {{InstanceProfileCredentialsProvider}} in the list, as it does a simple env 
> var check. In contrast, {{InstanceProfileCredentialsProvider}} makes an HTTP 
> request *even when not running on EC2*. It may block for up to 2s awaiting a 
> timeout, and network problems could take longer.
> Checking env vars is a low-cost operation that shouldn't have to wait for a 
> network timeout before being picked up.






[jira] [Updated] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-06-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13319:

Attachment: HADOOP-13252-branch-2-001.patch

Patch 001. Flips the order of creation.

Full test run against S3 Ireland.

> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers from which to pick up AWS 
> credentials.
> The environment variable provider added in HADOOP-12807 should go before 
> {{InstanceProfileCredentialsProvider}} in the list, as it does a simple env 
> var check. In contrast, {{InstanceProfileCredentialsProvider}} makes an HTTP 
> request *even when not running on EC2*. It may block for up to 2s awaiting a 
> timeout, and network problems could take longer.
> Checking env vars is a low-cost operation that shouldn't have to wait for a 
> network timeout before being picked up.






[jira] [Updated] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-06-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13319:

Status: Patch Available  (was: Open)

> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers from which to pick up AWS 
> credentials.
> The environment variable provider added in HADOOP-12807 should go before 
> {{InstanceProfileCredentialsProvider}} in the list, as it does a simple env 
> var check. In contrast, {{InstanceProfileCredentialsProvider}} makes an HTTP 
> request *even when not running on EC2*. It may block for up to 2s awaiting a 
> timeout, and network problems could take longer.
> Checking env vars is a low-cost operation that shouldn't have to wait for a 
> network timeout before being picked up.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shimingfei updated HADOOP-12756:

Attachment: (was: HADOOP-12756.006.patch)

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shimingfei updated HADOOP-12756:

Attachment: HADOOP-12756.006.patch

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Created] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-06-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13319:
---

 Summary: S3A to list InstanceProfileCredentialsProvider after 
EnvironmentVariableCredentialsProvider
 Key: HADOOP-13319
 URL: https://issues.apache.org/jira/browse/HADOOP-13319
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


S3A now has a default list of credential providers from which to pick up AWS 
credentials.

The environment variable provider added in HADOOP-12807 should go before 
{{InstanceProfileCredentialsProvider}} in the list, as it does a simple env 
var check. In contrast, {{InstanceProfileCredentialsProvider}} makes an HTTP 
request *even when not running on EC2*. It may block for up to 2s awaiting a 
timeout, and network problems could take longer.

Checking env vars is a low-cost operation that shouldn't have to wait for a 
network timeout before being picked up.
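
The ordering argument can be sketched in plain Java. This is an illustrative model only, with no AWS SDK; `envProvider` and `instanceProfileProvider` below are hypothetical stand-ins for the real provider classes. The point it shows: the first provider in the chain that yields credentials wins, so placing a zero-cost env-var check first means the metadata HTTP probe is only paid as a fallback.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

/** Illustrative sketch only -- not the actual S3A/AWS SDK classes. */
public class CredentialOrderSketch {

    /** Walk the chain in order; the first non-null answer wins. */
    static Optional<String> resolve(List<Supplier<String>> providers) {
        for (Supplier<String> provider : providers) {
            String creds = provider.get();
            if (creds != null) {
                return Optional.of(creds);
            }
        }
        return Optional.empty();
    }

    /** Stand-in for the environment variable provider: no I/O at all. */
    static Supplier<String> envProvider(boolean varSet) {
        return () -> varSet ? "env-credentials" : null;
    }

    /**
     * Stand-in for the instance profile provider: in real code this is an
     * HTTP call to instance metadata that can block for seconds off-EC2.
     */
    static Supplier<String> instanceProfileProvider() {
        return () -> {
            System.out.println("instance metadata probed");
            return "instance-credentials";
        };
    }

    public static void main(String[] args) {
        // Env var set: the expensive probe is never reached.
        System.out.println(resolve(
                List.of(envProvider(true), instanceProfileProvider())).get());
        // Env var unset: the probe runs only as a fallback.
        System.out.println(resolve(
                List.of(envProvider(false), instanceProfileProvider())).get());
    }
}
```

With the order flipped, the probe (and its potential 2s timeout) would run on every call, even when the env vars are already set.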






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347957#comment-15347957
 ] 

shimingfei commented on HADOOP-12756:
-

[~steve_l] Thanks for your useful suggestions. I have updated the code.

Two main changes:
1. Make sure ossClient is not null before calling close() on it, and make sure 
super.close() is always called.
2. Add CredentialProvider support, just like S3A.
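
Change 1 is the classic null-safe close pattern. A minimal sketch, using hypothetical stand-in classes (`BaseFs` for the Hadoop FileSystem superclass, `OssClientLike` for the Aliyun SDK client) rather than the real implementation:

```java
import java.io.IOException;

/** Hedged sketch of the close() pattern; not the actual OSS classes. */
public class NullSafeCloseSketch {

    static class OssClientLike {
        void shutdown() { /* releases connections in the real SDK */ }
    }

    static class BaseFs {
        boolean closed = false;
        public void close() throws IOException {
            closed = true; // superclass bookkeeping that must always run
        }
    }

    static class OssFs extends BaseFs {
        OssClientLike ossClient; // may be null if initialize() failed early

        @Override
        public void close() throws IOException {
            try {
                if (ossClient != null) { // guard: client may never have been created
                    ossClient.shutdown();
                }
            } finally {
                super.close(); // runs even if shutdown() throws
            }
        }
    }

    public static void main(String[] args) throws IOException {
        OssFs fs = new OssFs(); // ossClient left null on purpose
        fs.close();             // no NullPointerException
        System.out.println("super.close() ran: " + fs.closed); // prints true
    }
}
```

The try/finally is what guarantees the second half of the change: super.close() is reached whether the client is null, closes cleanly, or throws.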

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shimingfei updated HADOOP-12756:

Attachment: HADOOP-12756.006.patch

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shimingfei updated HADOOP-12756:

Attachment: (was: HADOOP-12756.006.patch)

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HCFS User manual.md, OSS 
> integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-06-24 Thread shimingfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shimingfei updated HADOOP-12756:

Attachment: HADOOP-12756.006.patch

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-06-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13312:
---
Attachment: HADOOP-13312-branch-2.7.01.patch

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-13312-branch-2.7.00.patch, 
> HADOOP-13312-branch-2.7.01.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.






[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token

2016-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347867#comment-15347867
 ] 

Hadoop QA commented on HADOOP-13251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
5s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 15s{color} 
| {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Possible null pointer dereference of queryStr in 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.isDTAuthDisallowed(URL)
  Dereferenced at DelegationTokenAuthenticator.java:queryStr in 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.isDTAuthDisallowed(URL)
  Dereferenced at DelegationTokenAuthenticator.java:[line 145] |
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | hadoop.crypto.key.kms.server.TestKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813003/HADOOP-13251.08.patch 
|
| JIRA Issue | HADOOP-13251 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5e2b22147e09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-06-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347824#comment-15347824
 ] 

Akira Ajisaka commented on HADOOP-13312:


[Fixed issues in 
2.7.3|https://issues.apache.org/jira/issues/?jql=project%20%3D%20MAPREDUCE%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.7.3%20ORDER%20BY%20updated%20DESC%2C%20priority%20DESC%2C%20created%20ASC]
 not in CHANGES.txt (MapReduce)
MAPREDUCE-6514, MAPREDUCE-6680

[Fixed issues in 
2.7.3|https://issues.apache.org/jira/issues/?jql=project%20%3D%20YARN%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.7.3%20ORDER%20BY%20updated%20DESC%2C%20priority%20DESC%2C%20created%20ASC]
 not in CHANGES.txt (YARN)
YARN-4556, YARN-4794, YARN-4850, YARN-4686

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-13312-branch-2.7.00.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.


