[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205712#comment-15205712
 ] 

Hadoop QA commented on HADOOP-11540:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 10m 11s {color} | 
{color:red} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 2 new + 19 unchanged 
- 2 fixed = 21 total (was 21) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 53s {color} | 
{color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 2 new + 29 unchanged 
- 2 fixed = 31 total (was 31) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 4 
new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 43s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | 

[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205710#comment-15205710
 ] 

Haohui Mai commented on HADOOP-12909:
-

I'm concerned about the complexity and the effort to support it down the road.

I understand in short term this is something that can be patched. However, we 
have enough troubles in today's IPC, maintaining it is definitely a challenging 
task especially. Does it make more sense to invest in other existing solutions 
that are more well tested, such as gRPC, which will allow us to make more 
effective investments in other areas?

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException

2016-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205634#comment-15205634
 ] 

Hadoop QA commented on HADOOP-12726:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 4s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 28s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 10s 

[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205626#comment-15205626
 ] 

Larry McCay commented on HADOOP-12942:
--

Let's walk through that proposal.

I think that the password file is marginally more secure because both files 
would have to be accessible in order to access the keystore and some folks may 
be willing to manage more files in order to get that additional protection. In 
addition, gaining access to one of those password files will only provide 
access to keystores that an attacker has access to *and* are protected by that 
particular password.

The AbstractJavaKeyStoreProvider already has support for a password file and 
can easily be used - we definitely need to document this clearly.

I have heard reluctance from folks in the past for having commands prompt for 
passwords and would certainly break the scriptability of it. We would have to 
add a switch that enabled the prompting for a password - if we were to add it 
to the credential create subcommand.

This same password file is used in lots of scenarios though: KMS, javakeystore 
providers for key provider API, oozie, signing secret providers,e tc. I wonder 
whether a separate command for it would make sense.
Keep in mind that we would need to do a number of things for this.

1. prompt for the password
2. persist it
3. set appropriate permissions on the file
4. somehow determine the filename to use (probably based on the password file 
name configuration) which would need to be provided by the user as well
5. allow for use of the same password file for multiple keystores or scenarios
6. allow for random-ish generated password without prompt

So, something like:

hadoop pwdfile -pwdfile.property.name 
hadoop.security.credstore.java-keystore-provider.password-file [-generate true] 
[-permissions 400]

This would check the Configuration for the provided pwdfile.property.name to 
get the file to persist the password to.
If generate is set to true then it doesn't prompt and generates a password to 
use - otherwise, prompts for a password.
(I could also see the opposite approach which would be default to generate 
unless a -interactive --i type switch is provided.)
If permissions are provided the file is created with those permissions 
otherwise, defaults to 400.


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down to via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> 

[jira] [Commented] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205517#comment-15205517
 ] 

Hadoop QA commented on HADOOP-12951:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 56s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794624/HADOOP-12951.01.patch 
|
| JIRA Issue | HADOOP-12951 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 70631512b8d0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e7ed05e |
| modules | C:  hadoop-common-project/hadoop-auth   
hadoop-common-project/hadoop-kms  U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build//console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Improve documentation on KMS ACLs and delegation tokens
> ---
>
> Key: HADOOP-12951
> URL: https://issues.apache.org/jira/browse/HADOOP-12951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12951.01.patch
>
>
> [~andrew.wang] suggested that the current KMS ACL page is not very 
> user-focused, and hard to come by without reading the code.
> I read the document (and the code), and I agree. So this jira puts more 
> documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-21 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205487#comment-15205487
 ] 

Mike Yoder commented on HADOOP-12942:
-

{quote}
We could:
{quote}
This is becoming bigger than the intended scope of this jira. :-)

{quote}
Add a command that provisions an encrypted master secret to a well-known 
location in HDFS
{quote}
We'd have to carefully think through what users would be able to perform this 
action. And if something like this could be automated instead. And where that 
"well-known location" might be - could it be configured (I think we'd have to). 
And what about recursion issues if that location was inside an Encryption Zone? 

{quote}
Obviously, this approach would require KMS to be in use and a new manual step 
to provision a master secret.
{quote}
I think what you propose is workable, but these new requirements do concern me. 
We'd also have to think through what users could perform this action (for this 
action and for making the key in the KMS). There are lot of moving parts. Seems 
like a case for a credential server (or credential server functionality in the 
KMS).

Back to the issue in this jira - regardless of the difficulty of handling the 
credential store password throughout the entire workflow, I still believe that 
the credential shell should ask for that password. It's got to be better than 
silently using "none" everywhere. And given that the key store provider has the 
ability to get the password from a file, it seems like it would be possible to 
put the password into a file for basically all use cases.


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down to via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would be rationaly be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-03-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11540:
---
Attachment: HADOOP-11540-v7.patch

Made an important change forgotten in last patch:
Moving {{doEncodeByConvertingToDirectBuffers}} to {{AbstractNativeRawEncoder}}, 
and similarly for AbstractNativeRawDecoder

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, HADOOP-11540-v5.patch, 
> HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205330#comment-15205330
 ] 

Andrew Wang commented on HADOOP-12892:
--

So, is the fix specifying an independent m2 dir? This means we end up 
downloading the world, but I've seen some good improvements by bumping the # of 
download threads and using Google's maven mirror.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12951:
---
Status: Patch Available  (was: Open)

> Improve documentation on KMS ACLs and delegation tokens
> ---
>
> Key: HADOOP-12951
> URL: https://issues.apache.org/jira/browse/HADOOP-12951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12951.01.patch
>
>
> [~andrew.wang] suggested that the current KMS ACL page is not very 
> user-focused, and hard to come by without reading the code.
> I read the document (and the code), and I agree. So this jira puts more 
> documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205310#comment-15205310
 ] 

Xiao Chen commented on HADOOP-12951:


Patch 1 explains the current relation between KMS ACL and Key ACL. Also 
corrected some outdated texts regarding delegation tokens. I have to admit that 
my understanding towards delegation token is limited, but IIUC, we don't need 
extra configs for it on HA since the configurations in 'HTTP Authentication 
Signature' already explains it.

> Improve documentation on KMS ACLs and delegation tokens
> ---
>
> Key: HADOOP-12951
> URL: https://issues.apache.org/jira/browse/HADOOP-12951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12951.01.patch
>
>
> [~andrew.wang] suggested that the current KMS ACL page is not very 
> user-focused, and hard to come by without reading the code.
> I read the document (and the code), and I agree. So this jira puts more 
> documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12951:
---
Attachment: HADOOP-12951.01.patch

> Improve documentation on KMS ACLs and delegation tokens
> ---
>
> Key: HADOOP-12951
> URL: https://issues.apache.org/jira/browse/HADOOP-12951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12951.01.patch
>
>
> [~andrew.wang] suggested that the current KMS ACL page is not very 
> user-focused, and hard to come by without reading the code.
> I read the document (and the code), and I agree. So this jira puts more 
> documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-21 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-12951:
--

 Summary: Improve documentation on KMS ACLs and delegation tokens
 Key: HADOOP-12951
 URL: https://issues.apache.org/jira/browse/HADOOP-12951
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiao Chen
Assignee: Xiao Chen


[~andrew.wang] suggested that the current KMS ACL page is not very 
user-focused, and hard to come by without reading the code.

I read the document (and the code), and I agree. So this jira puts more 
documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12941) abort in Unsafe_GetLong when running IA64 HPUX 64bit mode

2016-03-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205231#comment-15205231
 ] 

Colin Patrick McCabe commented on HADOOP-12941:
---

Hi gene,

It would be better if you put the stack into a comment than in the description.

bq. if (System.getProperty("os.arch").equals("sparc")) || 
System.getProperty("os.arch").equals("ia64")) should be a pretty easy fix just 
testing in would be the issue.

ok

bq. And I can post the fix, but since I have never done this before would 
benefit from some guidance.

Just attach it as a patch file to this JIRA.  Thanks!

> abort in Unsafe_GetLong when running IA64 HPUX 64bit mode 
> --
>
> Key: HADOOP-12941
> URL: https://issues.apache.org/jira/browse/HADOOP-12941
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: hpux IA64  running 64bit mode 
>Reporter: gene bradley
>
> Now that we have a core to look at we can sorta see what is going on#14 
> 0x9fffaf000dd0 in Java native_call_stub frame#15 0x9fffaf014470 in 
> JNI frame: sun.misc.Unsafe::getLong (java.lang.Object, long) ->long#16 
> 0x9fffaf0067a0 in interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
>  (byte[], int, int, byte[], int, int) ->int bci: 74#17 0x9fffaf0066e0 in 
> interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
>  (java.lang.Object, int, int, java.lang.Object, int, int) ->int bci: 16#18 
> 0x9fffaf006720 in interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes::compareTo (byte[], int, int, byte[], int, 
> int) ->int bci: 11#19 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compareRowKey 
> (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 36#20 
> 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compare 
> (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 3#21 
> 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compare (java.lang.Object, 
> java.lang.Object) ->int bci: 9;; Line: 4000xc0003ad84d30:0 
> :(p1)  ld8  
> r45=[r34]0xc0003ad84d30:1 :  adds   
>   r34=16,r320xc0003ad84d30:2 :  adds
>  ret0=8,r32;;0xc0003ad84d40:0 :  add
>   ret1=r35,r45 < r35 is off0xc0003ad84d40:1 
> :  ld8  
> r35=[r34],240xc0003ad84d40:2 :  nop.i   
>  0x00xc0003ad84d50:0 :  ld8 
>  r41=[ret0];;0xc0003ad84d50:1 :  ld8.s  
>   r49=[r34],-240xc0003ad84d50:2 :  
> nop.i0x00xc0003ad84d60:0 :  ld8 
>  r39=[ret1];; <=== abort0xc0003ad84d60:1 
> :  ld8  
> ret0=[r35]0xc0003ad84d60:2 :  nop.i 
>0x0;;0xc0003ad84d70:0 :  cmp.ne.unc  
>  p1=r0,ret0;;M,MI0xc0003ad84d70:1 :(p1)  mov
>   r48=r410xc0003ad84d70:2 :(p1)  
> chk.s.i  r49,Unsafe_GetLong+0x290(gdb) x /10i 
> $pc-48*20x9fffaf000d70:   flushrs 
>MMI0x9fffaf000d71:   mov  
> r44=r320x9fffaf000d72:   mov  
> r45=r330x9fffaf000d80:   mov  r46=r34 
>   MMI0x9fffaf000d81:   mov  
> r47=r350x9fffaf000d82:   mov  
> r48=r360x9fffaf000d90:   mov  r49=r37 
>   MMI0x9fffaf000d91:   mov  
> r50=r380x9fffaf000d92:   mov  r51=r39
> 0x9fffaf000da0:   adds r14=0x270,r4   
>MMI(gdb) p /x $r35$9 = 0x22(gdb) x /x 
> $ret10x9ffe1d0d2bda: 0x677a68676c78743a(gdb) x /x 
> $r45+0x220x9ffe1d0d2bda: 0x677a68676c78743aSo here is the problem,  
> this is a 64bit JVM 0 : /opt/java8/bin/IA64W/java1 : 
> -Djava.util.logging.config.file=/test28/gzh/tomcat/conf/logging.properties2 : 
> -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager3 : 
> -Dorg.apache.catalina.security.SecurityListener.UMASK=0224 : -server5 : 
> -XX:PermSize=128m6 : -XX:MaxPermSize=256m7 : 
> 

[jira] [Updated] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException

2016-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12726:
--
Attachment: HADOOP-12726.001.patch

Here's a first pass at the change.  I didn't see any test code that tests the 
unsupportedness of any method calls.

> Unsupported FS operations should throw UnsupportedOperationException
> 
>
> Key: HADOOP-12726
> URL: https://issues.apache.org/jira/browse/HADOOP-12726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12726.001.patch
>
>
> In the {{FileSystem}} implementation classes, unsupported operations throw 
> {{new IOException("Not supported")}}, which makes it needlessly difficult to 
> distinguish an actual error from an unsupported operation.  They should 
> instead throw {{new UnsupportedOperationException()}}.
> It's possible that this anti-idiom is used elsewhere in the code base.  This 
> JIRA should include finding and cleaning up those instances as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException

2016-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12726:
--
Status: Patch Available  (was: Open)

> Unsupported FS operations should throw UnsupportedOperationException
> 
>
> Key: HADOOP-12726
> URL: https://issues.apache.org/jira/browse/HADOOP-12726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12726.001.patch
>
>
> In the {{FileSystem}} implementation classes, unsupported operations throw 
> {{new IOException("Not supported")}}, which makes it needlessly difficult to 
> distinguish an actual error from an unsupported operation.  They should 
> instead throw {{new UnsupportedOperationException()}}.
> It's possible that this anti-idiom is used elsewhere in the code base.  This 
> JIRA should include finding and cleaning up those instances as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12929) JWTRedirectAuthenticationHandler must accommodate null expiration time

2016-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205121#comment-15205121
 ] 

Hudson commented on HADOOP-12929:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9484 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9484/])
HADOOP-12929. JWTRedirectAuthenticationHandler must accommodate null (benoy: 
rev e7ed05e4f5b0421e93f2f2cadc5beda3d28b9911)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java


> JWTRedirectAuthenticationHandler must accommodate null expiration time
> --
>
> Key: HADOOP-12929
> URL: https://issues.apache.org/jira/browse/HADOOP-12929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-12929-001.patch, HADOOP-12929-002.patch, 
> HADOOP-12929-003.patch
>
>
> The underlying JWT token within the hadoop-jwt cookie should be able to have 
> no expiration time. This allows the token lifecycle to be the same as the 
> cookie that contains it.
> Current validation processing of the token interprets the absence of an 
> expiration time as requiring a new token to be acquired. JWT itself considers 
> the exp to be an optional claim. As such, this patch will change the 
> processing to accept a null expiration as valid for as long as the cookie is 
> presented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12929) JWTRedirectAuthenticationHandler must accommodate null expiration time

2016-03-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205077#comment-15205077
 ] 

Larry McCay commented on HADOOP-12929:
--

Terrific - thanks, [~benoyantony]!

> JWTRedirectAuthenticationHandler must accommodate null expiration time
> --
>
> Key: HADOOP-12929
> URL: https://issues.apache.org/jira/browse/HADOOP-12929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-12929-001.patch, HADOOP-12929-002.patch, 
> HADOOP-12929-003.patch
>
>
> The underlying JWT token within the hadoop-jwt cookie should be able to have 
> no expiration time. This allows the token lifecycle to be the same as the 
> cookie that contains it.
> Current validation processing of the token interprets the absence of an 
> expiration time as requiring a new token to be acquired. JWT itself considers 
> the exp to be an optional claim. As such, this patch will change the 
> processing to accept a null expiration as valid for as long as the cookie is 
> presented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12929) JWTRedirectAuthenticationHandler must accommodate null expiration time

2016-03-21 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205058#comment-15205058
 ] 

Benoy Antony commented on HADOOP-12929:
---

Thanks for the patch, Larry.
Committed to trunk, branch-2 and branch-2.8.

> JWTRedirectAuthenticationHandler must accommodate null expiration time
> --
>
> Key: HADOOP-12929
> URL: https://issues.apache.org/jira/browse/HADOOP-12929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-12929-001.patch, HADOOP-12929-002.patch, 
> HADOOP-12929-003.patch
>
>
> The underlying JWT token within the hadoop-jwt cookie should be able to have 
> no expiration time. This allows the token lifecycle to be the same as the 
> cookie that contains it.
> Current validation processing of the token interprets the absence of an 
> expiration time as requiring a new token to be acquired. JWT itself considers 
> the exp to be an optional claim. As such, this patch will change the 
> processing to accept a null expiration as valid for as long as the cookie is 
> presented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12929) JWTRedirectAuthenticationHandler must accommodate null expiration time

2016-03-21 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-12929:
--
Release Note:   (was: Thanks for the patch, Larry.
Committed to trunk, branch-2 and branch-2.8.)

> JWTRedirectAuthenticationHandler must accommodate null expiration time
> --
>
> Key: HADOOP-12929
> URL: https://issues.apache.org/jira/browse/HADOOP-12929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-12929-001.patch, HADOOP-12929-002.patch, 
> HADOOP-12929-003.patch
>
>
> The underlying JWT token within the hadoop-jwt cookie should be able to have 
> no expiration time. This allows the token lifecycle to be the same as the 
> cookie that contains it.
> Current validation processing of the token interprets the absence of an 
> expiration time as requiring a new token to be acquired. JWT itself considers 
> the exp to be an optional claim. As such, this patch will change the 
> processing to accept a null expiration as valid for as long as the cookie is 
> presented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12929) JWTRedirectAuthenticationHandler must accommodate null expiration time

2016-03-21 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-12929:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: 
Thanks for the patch, Larry.
Committed to trunk, branch-2 and branch-2.8.
  Status: Resolved  (was: Patch Available)

> JWTRedirectAuthenticationHandler must accommodate null expiration time
> --
>
> Key: HADOOP-12929
> URL: https://issues.apache.org/jira/browse/HADOOP-12929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-12929-001.patch, HADOOP-12929-002.patch, 
> HADOOP-12929-003.patch
>
>
> The underlying JWT token within the hadoop-jwt cookie should be able to have 
> no expiration time. This allows the token lifecycle to be the same as the 
> cookie that contains it.
> Current validation processing of the token interprets the absence of an 
> expiration time as requiring a new token to be acquired. JWT itself considers 
> the exp to be an optional claim. As such, this patch will change the 
> processing to accept a null expiration as valid for as long as the cookie is 
> presented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12941) abort in Unsafe_GetLong when running IA64 HPUX 64bit mode

2016-03-21 Thread gene bradley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205054#comment-15205054
 ] 

gene bradley commented on HADOOP-12941:
---


Hi  Colin,

I always add stack trace and register info so if some other unlucky guy hits a 
crash they can do patch and match.

if (System.getProperty("os.arch").equals("sparc"))  || 
System.getProperty("os.arch").equals("ia64"))  

should be a pretty easy fix just testing in would be the issue.

This came in thru a customer of ours in China, let me check with them if they 
can swing a test for us.

And I can post the fix, but since I have never done this before would benefit 
from some guidance.


Gene






> abort in Unsafe_GetLong when running IA64 HPUX 64bit mode 
> --
>
> Key: HADOOP-12941
> URL: https://issues.apache.org/jira/browse/HADOOP-12941
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: hpux IA64  running 64bit mode 
>Reporter: gene bradley
>
> Now that we have a core to look at we can sorta see what is going on#14 
> 0x9fffaf000dd0 in Java native_call_stub frame#15 0x9fffaf014470 in 
> JNI frame: sun.misc.Unsafe::getLong (java.lang.Object, long) ->long#16 
> 0x9fffaf0067a0 in interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
>  (byte[], int, int, byte[], int, int) ->int bci: 74#17 0x9fffaf0066e0 in 
> interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
>  (java.lang.Object, int, int, java.lang.Object, int, int) ->int bci: 16#18 
> 0x9fffaf006720 in interpreted frame: 
> org.apache.hadoop.hbase.util.Bytes::compareTo (byte[], int, int, byte[], int, 
> int) ->int bci: 11#19 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compareRowKey 
> (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 36#20 
> 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compare 
> (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 3#21 
> 0x9fffaf0066e0 in interpreted frame: 
> org.apache.hadoop.hbase.KeyValue$KVComparator::compare (java.lang.Object, 
> java.lang.Object) ->int bci: 9;; Line: 4000xc0003ad84d30:0 
> :(p1)  ld8  
> r45=[r34]0xc0003ad84d30:1 :  adds   
>   r34=16,r320xc0003ad84d30:2 :  adds
>  ret0=8,r32;;0xc0003ad84d40:0 :  add
>   ret1=r35,r45 < r35 is off0xc0003ad84d40:1 
> :  ld8  
> r35=[r34],240xc0003ad84d40:2 :  nop.i   
>  0x00xc0003ad84d50:0 :  ld8 
>  r41=[ret0];;0xc0003ad84d50:1 :  ld8.s  
>   r49=[r34],-240xc0003ad84d50:2 :  
> nop.i0x00xc0003ad84d60:0 :  ld8 
>  r39=[ret1];; <=== abort0xc0003ad84d60:1 
> :  ld8  
> ret0=[r35]0xc0003ad84d60:2 :  nop.i 
>0x0;;0xc0003ad84d70:0 :  cmp.ne.unc  
>  p1=r0,ret0;;M,MI0xc0003ad84d70:1 :(p1)  mov
>   r48=r410xc0003ad84d70:2 :(p1)  
> chk.s.i  r49,Unsafe_GetLong+0x290(gdb) x /10i 
> $pc-48*20x9fffaf000d70:   flushrs 
>MMI0x9fffaf000d71:   mov  
> r44=r320x9fffaf000d72:   mov  
> r45=r330x9fffaf000d80:   mov  r46=r34 
>   MMI0x9fffaf000d81:   mov  
> r47=r350x9fffaf000d82:   mov  
> r48=r360x9fffaf000d90:   mov  r49=r37 
>   MMI0x9fffaf000d91:   mov  
> r50=r380x9fffaf000d92:   mov  r51=r39
> 0x9fffaf000da0:   adds r14=0x270,r4   
>MMI(gdb) p /x $r35$9 = 0x22(gdb) x /x 
> $ret10x9ffe1d0d2bda: 0x677a68676c78743a(gdb) x /x 
> $r45+0x220x9ffe1d0d2bda: 0x677a68676c78743aSo here is the problem,  
> this is a 64bit JVM 0 : /opt/java8/bin/IA64W/java1 : 
> -Djava.util.logging.config.file=/test28/gzh/tomcat/conf/logging.properties2 : 
> -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager3 : 
> -Dorg.apache.catalina.security.SecurityListener.UMASK=0224 

[jira] [Comment Edited] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204994#comment-15204994
 ] 

Allen Wittenauer edited comment on HADOOP-12892 at 3/21/16 8:08 PM:


That was my assessment too. 


Hadoop's release process is ... not good.  I've been (slowly) gearing up to 
make a run at a 3.x release for while now. Every corner I turn seems to be 
littered with more crud that needs to be cleaned up and manual processes that 
make no sense. :(

(... and yes, the rewrite of the release notes builder 2-3 years ago is when I 
started on this journey... sigh.)


was (Author: aw):
That was my assessment too. 


Hadoop's release process is ... not good.  I've been gearing up to make a run 
at a 3.x release for while now. Every corner I turn seems to be littered with 
more crud that needs to be cleaned up and manual processes that make no sense. 
:(

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204994#comment-15204994
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

That was my assessment too. 


Hadoop's release process is ... not good.  I've been gearing up to make a run 
at a 3.x release for while now. Every corner I turn seems to be littered with 
more crud that needs to be cleaned up and manual processes that make no sense. 
:(

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-21 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-12950:
---

 Summary: ShutdownHookManager should have a timeout for each of the 
Registered shutdown hook
 Key: HADOOP-12950
 URL: https://issues.apache.org/jira/browse/HADOOP-12950
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HADOOP-8325 added a ShutdownHookManager to be used by different components 
instead of the JVM shutdownhook. For each of the shutdown hook registered, we 
currently don't have an upper bound for its execution time. We have seen 
namenode failed to shutdown completely (waiting for shutdown hook to finish 
after failover) for a long period of time, which breaks the namenode high 
availability scenarios. This ticket is opened to allow specifying a timeout 
value for the registered shutdown hook.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204974#comment-15204974
 ] 

Sean Busbey commented on HADOOP-12892:
--

the apache-release profile is usually defined in the apache parent pom (ex: 
[the profile in 
org.apache:apache:17|http://svn.apache.org/viewvc/maven/pom/tags/apache-17/pom.xml?view=markup#l341]).

It's usually set up to help make sure projects are generating valid release 
artifacts. It looks like Hadoop doesn't use the ASF parent pom though?

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204930#comment-15204930
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

bq. Regarding the maven cache, is this at all addressed by the release builds 
not being SNAPSHOT versions? All the precommit stuff should be SNAPSHOT, and I 
doubt there are multiple RMs building the same non-SNAPSHOT release version.

I'll get to this in a second, but be aware there is still a process problem 
here:

a) RM fires off a job on a shared machine to make a release
b) Someone submits a patch that changes hadoop's or some other project's build 
such that it either mistakenly nukes the .m2 directory or maliciously changes 
the contents of jars or other blackhat stuff to other running processes on the 
Jenkins server
c) RM takes release artifacts, signs them (!) and then puts them up for vote, 
completely unaware of what is actually in the package...

In other words, doing releases on the Jenkins slaves is an absolutely terrible 
idea.  Docker is not going to protect us here. (See also 
http://www.apache.org/dev/release.html#owned-controlled-hardware, which means 
if we're not breaking the letter of the law, we're definitely breaking the 
spirit...)

Now that we've eliminated running this on jenkins for anything real, we're back 
to RMs running this on their own boxes.  RM who are much more likely to be 
running things in parallel, even on the same branch



> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11123) Uber-JIRA: Hadoop on Java 9

2016-03-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204903#comment-15204903
 ] 

Uwe Schindler commented on HADOOP-11123:


Apache Solr is also waiting for this. We had to disable all Hadoop stuff in 
Solr when running test suite with Java 9: SOLR-8874

> Uber-JIRA: Hadoop on Java 9
> ---
>
> Key: HADOOP-11123
> URL: https://issues.apache.org/jira/browse/HADOOP-11123
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
> Environment: Java 9
>Reporter: Steve Loughran
>
> JIRA to cover/track issues related to Hadoop on Java 9.
> Java 9 will have some significant changes —one of which is the removal of 
> various {{com.sun}} classes. These removals need to be handled or Hadoop will 
> not be able to run on a Java 9 JVM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204895#comment-15204895
 ] 

Xiaobing Zhou commented on HADOOP-12909:


[~steve_l] I am not clear for this when you said 'could this be set in the 
constructor?' Can you explain it? Thanks.

LIke I said, Client#asynchronousMode is maintained through thread-local is 
thread safe. Anything other than that in Client is thread safe or thread not 
safe as is when it's been shared across threads.

I will think of implementing FileSystem#getFileSystem and 
FileSystem#getAsyncFilesystem() over getting AsyncFilesystem through an 
instance of FileSystem (e.g. DistributedFileSystem), although in the latter 
case, there's benefit of easy implementation by reusing DistributedFileSystem 
object in async file system.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204887#comment-15204887
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12909:
--

BTW, the current patch aligns with the existing code -- the existing callId and 
retryCount are ThreadLocal variables.  The patch just adds two more ThreadLocal 
variables, asynchronousMode and returnValue.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204880#comment-15204880
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12909:
--

bq. ... I know it's not easy with some calls potentially being sync and some 
being async, but I'd propose having separate instances here. ... 
Are you saying that it needs two connections for supporting sync and async 
calls?  It seems undesirable to have two connections.  A single client should 
support both sync and async calls.

bq. ... Existing sync FS, through FileSystem.getFileSystem; the async one 
through FileSystem.getAsyncFilesystem() where the latter is (a) refcounted and 
(b) fully asyc

I do agree with the point above but let's raise the FileSystem API discussion 
in  HADOOP-12910 and focus on RPC API here.

bq. I do think it's ugly, and worry that it's dangerously easy to share a 
client across two threads, with consequences.

The current implementation with ThreadLocal is thread safe, i.e. it is safe to 
share a client with two threads.


> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204834#comment-15204834
 ] 

Xiaobing Zhou commented on HADOOP-12909:


[~steve_l] why is the thread-local (i.e. asynchronousMode) not thread safe? The 
reason why thread-local is chosen is to make setting asynchronousMode in a 
thread safe way. If you look at HADOOP-12910 patch, e.g. 
FutureDistributedFileSystem#rename, 
{code}
62Client.setAsynchronousMode(true);
63dfs.getClient().rename(dfs.getPathName(absSrc), 
dfs.getPathName(absDst),
64options);
65return Client.getReturnValue();
{code}

After calling Client#rename, the Client#getReturnValue() is immediately called 
to get Future object used to retrieve final results. 
FutureDistributedFileSystem#rename should be always in the same thread with 
Client. There's no chance of interleaving sync and async to result in any 
thread safety issues.

I agree that Client could implement AutoCloseable. Thanks.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204824#comment-15204824
 ] 

Chris Nauroth commented on HADOOP-12948:


I agree with converting these tests to use mini-KDC.  I think the startKdc 
profile has been broken for a long time.  In practice, I suspect this means no 
one really runs and pays attention to those tests.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using Mac so part of the reason might be my operating system (even though 
> the pom.xml stated it supported Mac), but the major problem is that it 
> attempted to download apacheds from newverhost.com, which does not seem exist 
> any more.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204783#comment-15204783
 ] 

Andrew Wang commented on HADOOP-12892:
--

Great, thanks for the updates!

bq. I thought about doing it that way, but the bash way is significantly faster 
due to no JVM overhead. But I can change it if people want.

Dropping a comment about the help plugin is sufficient. The build takes about 
15 mins though, so not sure a couple seconds of savings are a big deal.

bq. Yes. I absolutely must fix the maven cache problem.

Regarding the maven cache, is this at all addressed by the release builds not 
being SNAPSHOT versions? All the precommit stuff should be SNAPSHOT, and I 
doubt there are multiple RMs building the same non-SNAPSHOT release version.

Otherwise, is the fix specifying a fresh maven.repo.local?

Happy to help out however, I'm gonna start trying 3.0 builds as soon as this 
goes in.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12892) fix/rewrite create-release

2016-03-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12892:
-
Target Version/s: 2.8.0

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204752#comment-15204752
 ] 

Wei-Chiu Chuang commented on HADOOP-12948:
--

I think that instead of downloading/launching a standalone Apache DS server, it 
should use the embedded MiniKdc, which is based on Apache DS. MiniKdc 
implemented in HADOOP-9848 can be useful for this purpose.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using Mac so part of the reason might be my operating system (even though 
> the pom.xml stated it supported Mac), but the major problem is that it 
> attempted to download apacheds from newverhost.com, which does not seem exist 
> any more.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12949) Add HTrace to the s3a connector

2016-03-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204750#comment-15204750
 ] 

Colin Patrick McCabe commented on HADOOP-12949:
---

Hi [~madhawa], great idea!  I think the first thing to do is to read a bit 
about how to set up HTrace.  See 
http://blog.cloudera.com/blog/2015/12/new-in-cloudera-labs-apache-htrace-incubating/
If you can get a working setup for HTrace-on-HDFS, it will help for adding 
tracing to other projects such as the s3a connector.

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12949) Add HTrace to the s3a connector

2016-03-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12949:
--
Summary: Add HTrace to the s3a connector  (was: get probability 
distributions of PUT and GET requests to s3 and their impact on MR jobs)

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12948) Maven profile startKdc is broken

2016-03-21 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12948:
-
Description: 
{noformat}
mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true

main:
 [exec] xargs: illegal option -- -
 [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J 
replstr]
 [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
 [exec]  [utility [argument ...]]
 [exec] Result: 1
  [get] Getting: 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
  [get] To: 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
  [get] Error getting 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 to 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.448 s
[INFO] Finished at: 2016-03-21T10:00:56-07:00
[INFO] Final Memory: 31M/439M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
hadoop-common: An Ant BuildException has occured: 
java.net.UnknownHostException: newverhost.com
[ERROR] around Ant part ...http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
 @ 7:244 in 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}
I'm using Mac so part of the reason might be my operating system (even though 
the pom.xml stated it supported Mac), but the major problem is that it 
attempted to download apacheds from newverhost.com, which does not seem exist 
any more.

These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
order to run them.

  was:
{noformat}
mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true

main:
 [exec] xargs: illegal option -- -
 [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J 
replstr]
 [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
 [exec]  [utility [argument ...]]
 [exec] Result: 1
  [get] Getting: 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
  [get] To: 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
  [get] Error getting 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 to 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.448 s
[INFO] Finished at: 2016-03-21T10:00:56-07:00
[INFO] Final Memory: 31M/439M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
hadoop-common: An Ant BuildException has occured: 
java.net.UnknownHostException: newverhost.com
[ERROR] around Ant part ...http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
 @ 7:244 in 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}
I'm using Mac so part of the reason might be my operating system (even though 
the pom.xml stated it supported Mac), but the major problem is that it 
attempted to download apacheds from newverhost.com, which does not seem exist 
any more.

These tests were implemented in HADOOP-8087, and must have -DstartKdc=true in 
order to run them.


> Maven profile 

[jira] [Created] (HADOOP-12949) get probability distributions of PUT and GET requests to s3 and their impact on MR jobs

2016-03-21 Thread Madhawa Gunasekara (JIRA)
Madhawa Gunasekara created HADOOP-12949:
---

 Summary: get probability distributions of PUT and GET requests to 
s3 and their impact on MR jobs
 Key: HADOOP-12949
 URL: https://issues.apache.org/jira/browse/HADOOP-12949
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Madhawa Gunasekara


Hi All, 

s3, GCS, WASB, and other cloud blob stores are becoming increasingly important 
in Hadoop. But we don't have distributed tracing for these yet. It would be 
interesting to add distributed tracing here. It would enable collecting really 
interesting data like probability distributions of PUT and GET requests to s3 
and their impact on MR jobs, etc.

I would like to implement this feature, Please shed some light on this 

Thanks,
Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12948) Maven profile startKdc is broken

2016-03-21 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12948:


 Summary: Maven profile startKdc is broken
 Key: HADOOP-12948
 URL: https://issues.apache.org/jira/browse/HADOOP-12948
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: Mac OS
Reporter: Wei-Chiu Chuang


{noformat}
mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true

main:
 [exec] xargs: illegal option -- -
 [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J 
replstr]
 [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
 [exec]  [utility [argument ...]]
 [exec] Result: 1
  [get] Getting: 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
  [get] To: 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
  [get] Error getting 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
 to 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.448 s
[INFO] Finished at: 2016-03-21T10:00:56-07:00
[INFO] Final Memory: 31M/439M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
hadoop-common: An Ant BuildException has occured: 
java.net.UnknownHostException: newverhost.com
[ERROR] around Ant part ...http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
 @ 7:244 in 
/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}
I'm using Mac so part of the reason might be my operating system (even though 
the pom.xml stated it supported Mac), but the major problem is that it 
attempted to download apacheds from newverhost.com, which does not seem exist 
any more.

These tests were implemented in HADOOP-8087, and must have -DstartKdc=true in 
order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204595#comment-15204595
 ] 

Sanjay Radia commented on HADOOP-12909:
---

I haven't had a chance to look at the patch or review all the comments, but 
wanted to bring attention to one issue wrt async rpc that is well known by 
implementors and practitioners of message passing & rpc systems (excuse me if 
this has already been covered): 
* One needs to watch out for buffer management. ie. aync rpc/message passing 
has the potential to use up memory for buffering the messages. This is 
prevented in Sync rpc systems: 
** the sender (client) blocks and cannot flood the receiver unless it uses 
threads
** the receiver (server) is guaranteed that the sender (ie client) is waiting 
to receive and if it has died then the reply can be discarded.

With asyn rpc , my suggestion is to consider something along the following 
lines:
*  the client needs to allocate some buffer (or space for it)  where replies 
are stored. On each async rpc call, it passes a ref to this buffer for storing 
replies. If the client does not pick up the replies fast enough then his next 
async call using that buffer space will block. 
* Note this makes the clients code tricky in what to do if it is blocked since 
one must ensure that a deadlock or starvation  does not happen (but async 
messaging has always been tricky which is why cs community went with sync rpc). 
Note this problem does not arise on server side async-rpc since the client is 
blocked waiting for reply (unless the client also did async call but in that 
case its buffer, as per my suggestion,  must be there to store the reply).

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12876) [Azure Data Lake] Support for process level FileStatus cache to optimize GetFileStatus frequent opeations

2016-03-21 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204575#comment-15204575
 ] 

Vishwajeet Dusane commented on HADOOP-12876:


[~twu] Once HADOOP-12666 is resolved then i will incorporate FileStatus cache 
related changes in the latest ASF code and raise patch here.

> [Azure Data Lake] Support for process level FileStatus cache to optimize 
> GetFileStatus frequent opeations
> -
>
> Key: HADOOP-12876
> URL: https://issues.apache.org/jira/browse/HADOOP-12876
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>
> Add support to cache GetFileStatus and ListStatus response locally for 
> limited period of time. Local cache for limited period of time would optimize 
> number of calls for GetFileStatus operation.
> One of the example  where local limited period cache would be useful - 
> terasort ListStatus on input directory follows with GetFileStatus operation 
> on each file within directory. For 2048 input files in a directory would save 
> 2048 GetFileStatus calls during start up (Using the ListStatus response to 
> cache FileStatus instances).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-21 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-8145:
---

Assignee: Wei-Chiu Chuang

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204548#comment-15204548
 ] 

Wei-Chiu Chuang commented on HADOOP-8145:
-

Assigning this to myself. I am interested to see if it's possible to add unit 
tests using the existing dependencies.

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12947) Update documentation Hadoop Groups Mapping to add static group mapping, negative cache

2016-03-21 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12947:


 Summary: Update documentation Hadoop Groups Mapping to add static 
group mapping, negative cache
 Key: HADOOP-12947
 URL: https://issues.apache.org/jira/browse/HADOOP-12947
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, security
Affects Versions: 2.7.2
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


After _Hadoop Group Mapping_ was written, I subsequently found a number of 
other things that should be added/updated: 

# static group mapping, statically map users to group names (HADOOP-10142)
# negative cache, to avoid spamming NameNode with invalid user names 
(HADOOP-10755)
# update query pattern for LDAP groups mapping if posix semantics is supported. 
(HADOOP-9477)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12876) [Azure Data Lake] Support for process level FileStatus cache to optimize GetFileStatus frequent opeations

2016-03-21 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204524#comment-15204524
 ] 

Tony Wu commented on HADOOP-12876:
--

Hi [~vishwajeet.dusane], Thanks a lot for creating a separate JIRA to discuss 
the file status cache. I noticed you have removed the relevant code (i.e. 
{{FileStatusCacheManager}}) from the latest patch in HADOOP-12666. Do you mind 
reposting the cache implementation here?

I think you can post a patch for this JIRA based off the latest patch for 
HADOOP-12666.


> [Azure Data Lake] Support for process level FileStatus cache to optimize 
> GetFileStatus frequent opeations
> -
>
> Key: HADOOP-12876
> URL: https://issues.apache.org/jira/browse/HADOOP-12876
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>
> Add support to cache GetFileStatus and ListStatus response locally for 
> limited period of time. Local cache for limited period of time would optimize 
> number of calls for GetFileStatus operation.
> One of the example  where local limited period cache would be useful - 
> terasort ListStatus on input directory follows with GetFileStatus operation 
> on each file within directory. For 2048 input files in a directory would save 
> 2048 GetFileStatus calls during start up (Using the ListStatus response to 
> cache FileStatus instances).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2016-03-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204499#comment-15204499
 ] 

Sangjin Lee commented on HADOOP-11969:
--

I also added a comment on HDFS-10183.

I believe that JLS makes it clear that a memory barrier is required for the 
static initialization (by the JVM) and is expected from the user standpoint. 
This is something we should be able to rely on safely, or we have a bigger 
problem. And I don't think there is anything special about {{ThreadLocal}}.

I think it is a good idea to make these static variables final for a semantic 
reason and possibly to work around a JVM bug. However, for the record, we 
should be able to rely on any initial values of (non-final) static fields in 
general.

> ThreadLocal initialization in several classes is not thread safe
> 
>
> Key: HADOOP-11969
> URL: https://issues.apache.org/jira/browse/HADOOP-11969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: thread-safety
> Fix For: 2.8.0
>
> Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
> HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch
>
>
> Right now, the initialization of hte thread local factories for encoder / 
> decoder in Text are not marked final. This means they end up with a static 
> initializer that is not guaranteed to be finished running before the members 
> are visible. 
> Under heavy contention, this means during initialization some users will get 
> an NPE:
> {code}
> (2015-05-05 08:58:03.974 : solr_server_log.log) 
>  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
>   at org.apache.hadoop.io.Text.decode(Text.java:406)
>   at org.apache.hadoop.io.Text.decode(Text.java:389)
>   at org.apache.hadoop.io.Text.toString(Text.java:280)
>   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
>   at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
>   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
> ... SNIP...
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-21 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204218#comment-15204218
 ] 

Masatake Iwasaki commented on HADOOP-12868:
---

{noformat}
[WARNING] Used undeclared dependencies found:
[WARNING]commons-logging:commons-logging:jar:1.1.3:compile
[WARNING]org.apache.hadoop:hadoop-annotations:jar:3.0.0-SNAPSHOT:compile
[WARNING] Unused declared dependencies found:
[WARNING]commons-io:commons-io:jar:2.4:compile
[WARNING]org.mockito:mockito-all:jar:1.8.5:provided
[WARNING]com.google.guava:guava:jar:11.0.2:test
{noformat}

I attached a patch to fix issues reported by {{mvn dependency:analyze}}.

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12868.001.patch
>
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-21 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12868:
--
Attachment: HADOOP-12868.001.patch

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12868.001.patch
>
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204031#comment-15204031
 ] 

Steve Loughran commented on HADOOP-12909:
-

I don't know of any easy way, though this could be an opportunity to expose 
this functionality. In particular, could this be set in the constructor? I know 
it's not easy with some calls potentially being sync and some being async, but 
I'd propose having separate instances here. Existing sync FS, through 
{{FileSystem.getFileSystem}}; the async one through 
{{FileSystem.getAsyncFilesystem()}} where the latter is (a) refcounted and (b) 
fully asyc

I do think it's ugly, and worry that it's dangerously easy to share a client 
across two threads, with consequences.


I think you may want to involve other people with deep involvement in the IPC 
layer ([~sanjay.radia] for their insight.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism is already supporting asynchronous 
> calls -- the calls shares a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it return once the request has been sent out but not wait 
> for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12201) Add tracing to FileSystem#createFileSystem and Globber#glob

2016-03-21 Thread Feng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Yuan updated HADOOP-12201:
---
Description: Add tracing to FileSystem#createFileSystem and Glober#glob  
(was: Add tracing to FileSystem#createFileSystem and Globber#glob)

> Add tracing to FileSystem#createFileSystem and Globber#glob
> ---
>
> Key: HADOOP-12201
> URL: https://issues.apache.org/jira/browse/HADOOP-12201
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.8.0
>
> Attachments: HADOOP-12201.001-reupload.patch, HADOOP-12201.001.patch, 
> createfilesystem.png
>
>
> Add tracing to FileSystem#createFileSystem and Glober#glob



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)