[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528494#comment-15528494
 ] 

Chris Nauroth commented on HADOOP-13560:


Thank you, Steve.  I have started reviewing patch revision 006.  I haven't read 
through all of it yet, but here is my feedback so far.

This patch does not apply to current trunk, so we'll eventually need a 
different patch for trunk.

All access to {{S3ABlockOutputStream#closed}} happens through {{synchronized}} 
methods.  Would it be simpler to change the data type to straight {{boolean}}, 
or do you prefer to stick with {{AtomicBoolean}}?
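(A minimal sketch of the alternative being suggested, not the actual S3ABlockOutputStream code: when every access already happens inside {{synchronized}} methods, a plain {{boolean}} is safely published by the monitor, and the {{AtomicBoolean}} indirection adds nothing.)

```java
// Hypothetical sketch, not the actual S3ABlockOutputStream: the monitor
// acquired by the synchronized methods already guarantees visibility of
// the flag, so a plain boolean behaves the same as an AtomicBoolean here.
class SketchOutputStream {
    private boolean closed; // guarded by "this"

    public synchronized void close() {
        if (closed) {
            return; // idempotent: second close is a no-op
        }
        closed = true;
    }

    public synchronized boolean isClosed() {
        return closed;
    }
}
```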

{{S3ABlockOutputStream#now}} returns time in milliseconds, but the JavaDocs 
state nanoseconds.  Did you want {{System#nanoTime}} or possibly 
{{org.apache.hadoop.util.Time#monotonicNow}} for a millisecond measurement 
that's safe against system clock changes?
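(To illustrate the distinction, a hedged sketch rather than the Hadoop source: {{System#nanoTime}} is monotonic but returns nanoseconds, while a {{Time#monotonicNow}}-style helper derives milliseconds from it, so measured durations are unaffected by wall-clock resets.)

```java
// Hypothetical sketch of the two clocks under discussion:
// - monotonicNowMillis() never goes backwards, even if the system clock is reset
// - wallClockMillis() can jump when the system clock changes
class Clocks {
    static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000L; // same idea as Time#monotonicNow
    }

    static long wallClockMillis() {
        return System.currentTimeMillis(); // subject to clock adjustments
    }
}
```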

Can ITestS3AHuge* be made to run in parallel instead of sequentially?  It appears 
these tests are already sufficiently isolated from one another.  They call 
{{S3AScaleTestBase#getTestPath}}, so they are guaranteed to operate on isolated 
paths within the bucket.  They also disable the multi-part upload purge in 
{{S3AScaleTestBase#setUp}}.  Is there another isolation problem I missed, or is 
the idea more that you don't want activity from another test running in 
parallel to pollute metrics reported from the scale tests due to bandwidth 
limitations or throttling?

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13653) ZKDelegationTokenSecretManager curator client seems to rapidly connect & disconnect from ZK

2016-09-27 Thread Alex Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528459#comment-15528459
 ] 

Alex Ivanov commented on HADOOP-13653:
--

[~xiaochen], thank you for your comments. The zookeeper (ZK) cluster seemed 
healthy, and nothing in any of the zookeeper logs indicated loss of quorum or 
random disconnects.
Instead, it seems ZK connections became unstable after an accumulation of a 
significant number of delegation tokens for KMS (>160,000). I'm not sure how 
this caused the issue, but once we manually deleted the tokens, the disconnects 
stopped. Once we apply the patch you provided for 
[HADOOP-13487|https://issues.apache.org/jira/browse/HADOOP-13487] (thank you!), 
I expect we'll be able to better manage the number of dtokens in ZK.

I do wish we were able to control some of the parameters for curator, so that 
we could adjust the timeouts for our needs and curtail the repetitive error 
logging when a disconnect happens - these logs have taken up to 70GB of space 
per day, which turns a single log viewing into a big data problem.

On a different note, in a situation with multiple KMS instances, you pointed 
out how the {{LoadBalancingKMSClientProvider}} will try to find a working KMS. 
The problem I've seen is that the KMS client timeout seems quite long, so when 
one KMS instance fails, it takes a long time to talk to KMS from a client 
perspective. Do you know how we can configure this behavior and set a shorter 
timeout?
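
(A hedged pointer for later readers: in Hadoop releases of this era the KMS 
client appears to read its timeout, in seconds, from a {{core-site.xml}} key 
like the one below - treat the key name and default as an assumption to verify 
against your Hadoop version.)

{code}
<!-- Assumed key name; verify against your Hadoop version.
     Lowering it shortens how long clients wait on a dead KMS instance
     before LoadBalancingKMSClientProvider fails over to the next one. -->
<property>
  <name>hadoop.security.kms.client.timeout</name>
  <value>10</value>
</property>
{code}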

> ZKDelegationTokenSecretManager curator client seems to rapidly connect & 
> disconnect from ZK
> ---
>
> Key: HADOOP-13653
> URL: https://issues.apache.org/jira/browse/HADOOP-13653
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Alex Ivanov
>Priority: Critical
>
> At times, KMS gets into a connect/disconnect loop with Zookeeper. 
> It is not clear what causes the connection to be closed. I didn't see any 
> issues on the ZK server side, so the issue must reside on the client side.
> *Example errors*
> NOTE: I had to filter the logs heavily since they were many GB in size 
> (thanks to curator error logging). What is left is an illustration of the 
> delegation token creations, and the Zookeeper sessions getting lost and 
> re-established over the course of 2 hours.
> {code}
> 2016-09-25 01:43:04,377 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [75027a21ab399aa7789d6907d70fadc4, 46]
> 2016-09-25 01:43:04,557 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [1106d0754d43dcf29324d7be737f51f0, 46]
> 2016-09-25 01:43:11,846 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [4426092c861f49c6ba0c60b49b9539e5, 46]
> 2016-09-25 01:43:48,974 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [a99efff2705d6489deb059098f18818f, 46]
> 2016-09-25 01:43:49,174 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [398b5962fd647880961ba5e86a77b414, 46]
> 2016-09-25 01:44:03,359 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [413187e62a21b5459422b5c524315d06, 46]
> 2016-09-25 01:44:03,625 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [7cc2c0d82edd40e7e6f6f40af20d04d3, 46]
> 2016-09-25 01:44:06,062 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [bd9394fce20607c12bc00104bea49284, 46]
> 2016-09-25 01:44:07,134 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [7dad3bd10526517e5e1cfccd2e96074a, 46]
> 2016-09-25 01:44:07,230 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [a712ed40687580647d070c9c7f525e15, 46]
> 2016-09-25 01:44:48,481 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [44bfefa31192c68e3cc053eec4e57e14, 46]
> 2016-09-25 01:44:48,522 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [67efc2aa65eeba701ad7d3d7bab51def, 46]
> 2016-09-25 01:44:50,259 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [b43e641f58dfbd2c72550ab6804f37d1, 46]
> 2016-09-25 01:44:54,271 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [ac2fbcf404c633759b75e6d6aae00e05, 46]
> 2016-09-25 01:44:56,141 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [cdbd224079a4a10400d00d0b8eece008, 46]
> 2016-09-25 01:45:01,328 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [e03218f4835524f3d05519d27bb04e35, 46]
> 2016-09-25 01:45:02,728 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for 

[jira] [Commented] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-09-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528407#comment-15528407
 ] 

Daniel Templeton commented on HADOOP-13667:
---

Ha!  Never mind.  Should've looked at your profile first. :)

> Fix typing mistake of inline document in hadoop-metrics2.properties
> ---
>
> Key: HADOOP-13667
> URL: https://issues.apache.org/jira/browse/HADOOP-13667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Rui Gao
>Assignee: Rui Gao
>
> Fix typing mistake of inline document in hadoop-metrics2.properties.
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}
> And also could add examples into the inline document for easier understanding 
> of metrics tag related configuration.






[jira] [Commented] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-09-27 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528405#comment-15528405
 ] 

Daniel Templeton commented on HADOOP-13667:
---

Let me know if you need a pointer for how to create and post a patch.

> Fix typing mistake of inline document in hadoop-metrics2.properties
> ---
>
> Key: HADOOP-13667
> URL: https://issues.apache.org/jira/browse/HADOOP-13667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Rui Gao
>Assignee: Rui Gao
>
> Fix typing mistake of inline document in hadoop-metrics2.properties.
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}
> And also could add examples into the inline document for easier understanding 
> of metrics tag related configuration.






[jira] [Updated] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-09-27 Thread Rui Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Gao updated HADOOP-13667:
-
Description: 
Fix typing mistake of inline document in hadoop-metrics2.properties.
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}

And also could add examples into the inline document for easier understanding 
of metrics tag related configuration.

  was:
Fix typing mistake of inline document in hadoop-metrics2.properties.
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}.

And also could add examples into the inline document for easier understanding 
of metrics tag related configuration.


> Fix typing mistake of inline document in hadoop-metrics2.properties
> ---
>
> Key: HADOOP-13667
> URL: https://issues.apache.org/jira/browse/HADOOP-13667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Rui Gao
>Assignee: Rui Gao
>
> Fix typing mistake of inline document in hadoop-metrics2.properties.
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}
> And also could add examples into the inline document for easier understanding 
> of metrics tag related configuration.






[jira] [Updated] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-09-27 Thread Rui Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Gao updated HADOOP-13667:
-
Description: 
Fix typing mistake of inline document in hadoop-metrics2.properties.
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
{code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}.

And also could add examples into the inline document for easier understanding 
of metrics tag related configuration.

  was:
Fix typing mistake of inline document in hadoop-metrics2.properties.
{{#*.sink.ganglia.tagsForPrefix.jvm=ProcesName}} should be 
{{#*.sink.ganglia.tagsForPrefix.jvm=ProcessName}}.

And also could add examples into the inline document for easier understanding 
of metrics tag related configuration.


> Fix typing mistake of inline document in hadoop-metrics2.properties
> ---
>
> Key: HADOOP-13667
> URL: https://issues.apache.org/jira/browse/HADOOP-13667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Rui Gao
>Assignee: Rui Gao
>
> Fix typing mistake of inline document in hadoop-metrics2.properties.
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}.
> And also could add examples into the inline document for easier understanding 
> of metrics tag related configuration.






[jira] [Created] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-09-27 Thread Rui Gao (JIRA)
Rui Gao created HADOOP-13667:


 Summary: Fix typing mistake of inline document in 
hadoop-metrics2.properties
 Key: HADOOP-13667
 URL: https://issues.apache.org/jira/browse/HADOOP-13667
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Rui Gao
Assignee: Rui Gao


Fix typing mistake of inline document in hadoop-metrics2.properties.
{{#*.sink.ganglia.tagsForPrefix.jvm=ProcesName}} should be 
{{#*.sink.ganglia.tagsForPrefix.jvm=ProcessName}}.

It would also help to add examples to the inline documentation, for easier 
understanding of metrics-tag-related configuration.
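
(A hedged illustration of the kind of inline example being proposed - the sink 
name, host, and port below are placeholders, not values from this issue.)

{code}
# Corrected spelling, with a usage example (host/port are placeholders):
# tag all metrics records whose prefix is "jvm" with the ProcessName tag
*.sink.ganglia.tagsForPrefix.jvm=ProcessName
# send the namenode's metrics to a Ganglia gmetad instance
namenode.sink.ganglia.servers=gmetad.example.com:8649
{code}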






[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528347#comment-15528347
 ] 

Hadoop QA commented on HADOOP-13628:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 285 unchanged - 2 fixed = 285 total (was 287) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830619/HADOOP-13628.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 084ec965c90d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6437ba1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10621/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10621/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> 

[jira] [Commented] (HADOOP-13655) document object store use with fs shell and distcp

2016-09-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528321#comment-15528321
 ] 

Yuanbo Liu commented on HADOOP-13655:
-

[~ste...@apache.org] I've reviewed your pull request on GitHub. Great work! 
Since I don't have much knowledge about object stores, I only found some 
trivial mistakes there. I would be glad to test those commands if I had an 
object store environment.
Thanks again for your work, well done!


> document object store use with fs shell and distcp
> --
>
> Key: HADOOP-13655
> URL: https://issues.apache.org/jira/browse/HADOOP-13655
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> There's no specific docs for working with object stores from the {{hadoop 
> fs}} shell or in distcp; people either suffer from this (performance, 
> billing), or learn through trial and error what to do.
> Add a section in both fs shell and distcp docs covering use with object 
> stores.






[jira] [Commented] (HADOOP-13655) document object store use with fs shell and distcp

2016-09-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528314#comment-15528314
 ] 

ASF GitHub Bot commented on HADOOP-13655:
-

Github user yuanboliu commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/131#discussion_r80839392
  
--- Diff: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md ---
@@ -729,3 +757,278 @@ usage
 Usage: `hadoop fs -usage command`
 
 Return the help for an individual command.
+
+
+Working with Object Storage
+
+
+The Hadoop FileSystem shell works with Object Stores such as Amazon S3, 
+Azure WASB and OpenStack Swift.
+
+
+
+```bash
+# Create a directory
+hadoop fs -mkdir s3a://bucket/datasets/
+
+# Upload a file from the cluster filesystem
+hadoop fs -put /datasets/example.orc s3a://bucket/datasets/
+
+# touch a file
+hadoop fs -touchz 
wasb://yourcontai...@youraccount.blob.core.windows.net/touched
+```
+
+Unlike a normal filesystem, renaming files and directories in an object 
store
+usually takes time proportional to the size of the objects being 
manipulated.
+As many of the filesystem shell operations
+use renaming as the final stage in operations, skipping that stage
+can avoid long delays.
+ 
+In particular, the `put` and `copyFromLocal` commands should
+both have the `-d` options set for a direct upload.
+
+
+```bash
+# Upload a file from the cluster filesystem
+hadoop fs -put -d /datasets/example.orc s3a://bucket/datasets/
+
+# Upload a file from the local filesystem
+hadoop fs -copyFromLocal -d -f ~/datasets/devices.orc 
s3a://bucket/datasets/
--- End diff --

hadoop fs -copyFromLocal -d -f ~/datasets/devices.orc s3a://bucket/datasets/
The symbol "~" is redundant, right?


> document object store use with fs shell and distcp
> --
>
> Key: HADOOP-13655
> URL: https://issues.apache.org/jira/browse/HADOOP-13655
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> There's no specific docs for working with object stores from the {{hadoop 
> fs}} shell or in distcp; people either suffer from this (performance, 
> billing), or learn through trial and error what to do.
> Add a section in both fs shell and distcp docs covering use with object 
> stores.






[jira] [Commented] (HADOOP-13655) document object store use with fs shell and distcp

2016-09-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528312#comment-15528312
 ] 

ASF GitHub Bot commented on HADOOP-13655:
-

Github user yuanboliu commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/131#discussion_r80839511
  
--- Diff: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md ---
@@ -729,3 +757,278 @@ usage
 Usage: `hadoop fs -usage command`
 
 Return the help for an individual command.
+
+
+Working with Object Storage
+
+
+The Hadoop FileSystem shell works with Object Stores such as Amazon S3, 
+Azure WASB and OpenStack Swift.
+
+
+
+```bash
+# Create a directory
+hadoop fs -mkdir s3a://bucket/datasets/
+
+# Upload a file from the cluster filesystem
+hadoop fs -put /datasets/example.orc s3a://bucket/datasets/
+
+# touch a file
+hadoop fs -touchz 
wasb://yourcontai...@youraccount.blob.core.windows.net/touched
+```
+
+Unlike a normal filesystem, renaming files and directories in an object 
store
+usually takes time proportional to the size of the objects being 
manipulated.
+As many of the filesystem shell operations
+use renaming as the final stage in operations, skipping that stage
+can avoid long delays.
+ 
+In particular, the `put` and `copyFromLocal` commands should
+both have the `-d` options set for a direct upload.
+
+
+```bash
+# Upload a file from the cluster filesystem
+hadoop fs -put -d /datasets/example.orc s3a://bucket/datasets/
+
+# Upload a file from the local filesystem
+hadoop fs -copyFromLocal -d -f ~/datasets/devices.orc 
s3a://bucket/datasets/
+
+# create a file from stdin
+echo "hello" | hadoop fs -put -d -f - 
wasb://yourcontai...@youraccount.blob.core.windows.net/hello.txt
--- End diff --

`hadoop fs -put -d -f - wasb:` should be `hadoop fs -put -d -f wasb:`


> document object store use with fs shell and distcp
> --
>
> Key: HADOOP-13655
> URL: https://issues.apache.org/jira/browse/HADOOP-13655
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> There's no specific docs for working with object stores from the {{hadoop 
> fs}} shell or in distcp; people either suffer from this (performance, 
> billing), or learn through trial and error what to do.
> Add a section in both fs shell and distcp docs covering use with object 
> stores.






[jira] [Commented] (HADOOP-13655) document object store use with fs shell and distcp

2016-09-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528313#comment-15528313
 ] 

ASF GitHub Bot commented on HADOOP-13655:
-

Github user yuanboliu commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/131#discussion_r80836707
  
--- Diff: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md ---
@@ -315,7 +324,11 @@ Returns 0 on success and -1 on error.
 
 Options:
 
-The -f option will overwrite the destination if it already exists.
+* `-p` : Preserves access and modification times, ownership and the 
permissions.
+(assuming the permissions can be propagated across filesystems)
+* `-f` : Overwrites the destination if it already exists.
+* `-ignorecrc` : Skip CRC checks on the file(s) downloaded.
+* `crc`: write CRC checksums for the files downloaded.
--- End diff --

`crc` should be `-crc`


> document object store use with fs shell and distcp
> --
>
> Key: HADOOP-13655
> URL: https://issues.apache.org/jira/browse/HADOOP-13655
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs, fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> There's no specific docs for working with object stores from the {{hadoop 
> fs}} shell or in distcp; people either suffer from this (performance, 
> billing), or learn through trial and error what to do.
> Add a section in both fs shell and distcp docs covering use with object 
> stores.






[jira] [Commented] (HADOOP-13640) Fix findbugs warning in VersionInfoMojo.java

2016-09-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528172#comment-15528172
 ] 

Yuanbo Liu commented on HADOOP-13640:
-

[~ozawa] and [~ajisakaa], please take a look at it if you have time. Thanks in 
advance.

> Fix findbugs warning in VersionInfoMojo.java
> 
>
> Key: HADOOP-13640
> URL: https://issues.apache.org/jira/browse/HADOOP-13640
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Yuanbo Liu
> Attachments: HADOOP-13640.001.patch
>
>
> Reported by Arpit on HADOOP-13602
> {quote}
> [INFO] 
> org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo.getSvnUriInfo(String)
>  uses String.indexOf(String) instead of String.indexOf(int) 
> ["org.apache.hadoop.maven.plugin.versioninfo.VersionInfoMojo"] At 
> VersionInfoMojo.java:[lines 49-341]
> {quote}
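
(For readers unfamiliar with this findbugs pattern, a hedged sketch of the 
usual fix - not the actual VersionInfoMojo code: when the needle is a single 
character, {{String.indexOf(int)}} / {{lastIndexOf(int)}} avoids the 
String-search overhead that findbugs flags.)

```java
// Hypothetical sketch of the flagged pattern and its usual fix, not the
// actual VersionInfoMojo code: pass a char, not a one-character String,
// when searching for a single delimiter.
class UriInfo {
    static String lastPathComponent(String uri) {
        int idx = uri.lastIndexOf('/'); // char overload, not lastIndexOf("/")
        return idx >= 0 ? uri.substring(idx + 1) : uri;
    }
}
```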






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528166#comment-15528166
 ] 

Kai Zheng commented on HADOOP-12756:


I have reverted the commit and posted a VOTE thread in the common dev mailing 
list. Kindly review this work and give your vote there, thanks!

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but currently it is not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between users' applications and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Attachment: HADOOP-13628.06.patch

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> HADOOP-13628.02.patch, HADOOP-13628.03.patch, HADOOP-13628.04.patch, 
> HADOOP-13628.05.patch, HADOOP-13628.06.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work on the client side when dealing with Hadoop configurations, and it 
> is also quite a lot of overhead to send the entire configuration in an HTTP 
> response over the network. I propose to support a {{name}} parameter in the 
> HTTP request, by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> to get output such as
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.
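A client of the proposed endpoint only needs to build the {{?name=}} query. A minimal sketch of that client-side step (the helper class below is hypothetical and not part of the patch; only the URL shape comes from the proposal above):

```java
// Hypothetical helper: builds the single-property request URL proposed
// above, instead of fetching and parsing the whole /conf response.
public class ConfPropertyUrl {
    static String forProperty(String host, String property) {
        // Shape taken from the proposal: http://<host>/conf?name=<property>
        return "http://" + host + "/conf?name=" + property;
    }

    public static void main(String[] args) {
        String url = forProperty("rm-host:8088", "yarn.nodemanager.aux-services");
        if (!url.equals("http://rm-host:8088/conf?name=yarn.nodemanager.aux-services")) {
            throw new AssertionError(url);
        }
        System.out.println(url);
    }
}
```

Any HTTP client can then issue a GET against the resulting URL with an {{Accept: application/json}} header, as in the curl example.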






[jira] [Commented] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528076#comment-15528076
 ] 

Hudson commented on HADOOP-13658:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10502/])
HADOOP-13658. Replace config key literal strings with names I: hadoop (liuml07: 
rev 9a44a832a99eb967aa4e34338dfa75baf35f9845)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BloomMapFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DefaultCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocksSocketFactory.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/hash/Hash.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Compression.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BZip2Codec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java


> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch, 
> HADOOP-13658.003.patch
>
>
> In Hadoop Common, there are several places where config keys are referenced 
> by literal strings instead of by their named constants in the configuration 
> key classes. The default values have the same issue. For example,
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead
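The refactoring pattern above is easy to sketch in isolation (the class below is illustrative only; the real patch references the existing {{CommonConfigurationKeys}} fields rather than defining new constants):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the pattern the patch applies: declare the key
// and its default once as named constants, then reference them everywhere,
// so the literal string appears in exactly one place.
public class ConfKeysSketch {
    static final String IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY =
        "io.compression.codec.lzo.buffersize";
    static final int IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT = 64 * 1024;

    public static void main(String[] args) {
        Map<String, Integer> conf = new HashMap<>();
        // Instead of conf.put("io.compression.codec.lzo.buffersize", 64 * 1024):
        conf.put(IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
                 IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
        System.out.println(conf.get(IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY));
    }
}
```

With the constant defined once, a typo in the key string becomes a compile error at the use site rather than a silently ignored configuration entry.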






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528077#comment-15528077
 ] 

Hudson commented on HADOOP-12756:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10502/])
Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This 
(kai.zheng: rev d1443988f809fe6656f60dfed4ee4e0f4844ee5c)
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDistCp.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractCreate.java
* (delete) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (edit) hadoop-tools/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSOutputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractSeek.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) hadoop-project/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDelete.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/AliyunOSSContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractGetFileStatus.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
* (delete) hadoop-tools/hadoop-aliyun/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractOpen.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRootDir.java
* (edit) .gitignore
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractMkdir.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRename.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java


> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but it is currently not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. By simple configuration, 
> Spark/Hadoop 

[jira] [Commented] (HADOOP-13584) hadoop-aliyun: merge HADOOP-12756 branch back

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528078#comment-15528078
 ] 

Hudson commented on HADOOP-13584:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10502/])
Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This 
(kai.zheng: rev d1443988f809fe6656f60dfed4ee4e0f4844ee5c)
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDistCp.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractCreate.java
* (delete) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (edit) hadoop-tools/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSOutputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractSeek.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) hadoop-project/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDelete.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/AliyunOSSContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractGetFileStatus.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
* (delete) hadoop-tools/hadoop-aliyun/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractOpen.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRootDir.java
* (edit) .gitignore
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractMkdir.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRename.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java


> hadoop-aliyun: merge HADOOP-12756 branch back
> -
>
> Key: HADOOP-13584
> URL: https://issues.apache.org/jira/browse/HADOOP-13584
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13584.001.patch, HADOOP-13584.002.patch, 
> HADOOP-13584.003.patch, HADOOP-13584.004.patch
>
>
> We have finished a round of improvement over the HADOOP-12756 branch, which 
> intends to incorporate Aliyun OSS support in Hadoop. This feature provides 
> basic support for data access to Aliyun OSS from Hadoop applications.
> In the implementation, we follow the style of the S3 support in Hadoop. We 
> also provide FileSystem contract tests against a real Aliyun OSS environment, 
> which can be enabled/disabled by simple configuration.




[jira] [Updated] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-27 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13631:
--
Attachment: HADOOP-13631-HADOOP-13345.001.patch

> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13631-HADOOP-13345.001.patch
>
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)






[jira] [Reopened] (HADOOP-13584) hadoop-aliyun: merge HADOOP-12756 branch back

2016-09-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reopened HADOOP-13584:


This was reverted according to the latest discussion in HADOOP-12756.

> hadoop-aliyun: merge HADOOP-12756 branch back
> -
>
> Key: HADOOP-13584
> URL: https://issues.apache.org/jira/browse/HADOOP-13584
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13584.001.patch, HADOOP-13584.002.patch, 
> HADOOP-13584.003.patch, HADOOP-13584.004.patch
>
>
> We have finished a round of improvement over the HADOOP-12756 branch, which 
> intends to incorporate Aliyun OSS support in Hadoop. This feature provides 
> basic support for data access to Aliyun OSS from Hadoop applications.
> In the implementation, we follow the style of the S3 support in Hadoop. We 
> also provide FileSystem contract tests against a real Aliyun OSS environment, 
> which can be enabled/disabled by simple configuration.






[jira] [Commented] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-27 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528020#comment-15528020
 ] 

Aaron Fabbri commented on HADOOP-13631:
---

Oops, attached a stale patch. An updated one is coming shortly.

> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)






[jira] [Updated] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-27 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13631:
--
Attachment: (was: HADOOP-13631-HADOOP-13345.001.patch)

> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)






[jira] [Updated] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-27 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13631:
--
Attachment: HADOOP-13631-HADOOP-13345.001.patch

Attaching initial patch.  I will modify this based on our discussion on 
HADOOP-13448 about the move() interface.

{quote}
HADOOP-13631 S3Guard: implement move() for LocalMetadataStore, add unit tests

This is an initial RFC patch.  Based on conversation on HADOOP-13448 we may
simplify the move() interface I've proposed here.

Also, change DirListingMetadata#setAuthoritative() to take a boolean arg.
Setting this to false is a way to trigger clients to re-consult the backing
store if, for example, a user adds new files to a directory.
{quote}

I think we probably need another sub-jira for implementing delete tracking.  I 
did not implement that here as it affects other code and may require other 
interface changes.
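As a rough illustration of the interface under discussion (a hedged sketch over a plain map, not the actual {{MetadataStore}} API or the attached patch), move() can be modeled as a batched remove of every entry under the source path plus a reinsert under the destination:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: an in-memory path -> status map where move() is a batched
// delete of source-subtree entries plus a put of the renamed entries.
// The real LocalMetadataStore works with FileStatus/DirListingMetadata.
public class InMemoryMetadataSketch {
    private final Map<String, String> entries = new HashMap<>();

    void put(String path, String status) { entries.put(path, status); }
    String get(String path) { return entries.get(path); }

    // Move every entry under srcPrefix to the same suffix under dstPrefix.
    // (A naive prefix match; real path handling needs component boundaries.)
    void move(String srcPrefix, String dstPrefix) {
        Map<String, String> moved = new HashMap<>();
        entries.entrySet().removeIf(e -> {
            if (e.getKey().startsWith(srcPrefix)) {
                moved.put(dstPrefix + e.getKey().substring(srcPrefix.length()),
                          e.getValue());
                return true;
            }
            return false;
        });
        entries.putAll(moved);
    }

    public static void main(String[] args) {
        InMemoryMetadataSketch ms = new InMemoryMetadataSketch();
        ms.put("/a/file1", "FILE");
        ms.put("/a/sub/file2", "FILE");
        ms.move("/a", "/b");
        if (ms.get("/a/file1") != null || !"FILE".equals(ms.get("/b/file1"))) {
            throw new AssertionError("move failed");
        }
        System.out.println("move ok");
    }
}
```

Delete tracking, as noted above, is the missing piece: after a move the store must also remember that the source paths no longer exist, rather than merely forgetting them.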

> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13631-HADOOP-13345.001.patch
>
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)






[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13658:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~vagarychen] for your contribution. I have committed this patch to 
{{trunk}}, {{branch-2}} and {{branch-2.8}}.

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch, 
> HADOOP-13658.003.patch
>
>
> In Hadoop Common, there are several places where config keys are referenced 
> by literal strings instead of by their named constants in the configuration 
> key classes. The default values have the same issue. For example,
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead






[jira] [Commented] (HADOOP-11780) Prevent IPC reader thread death

2016-09-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527936#comment-15527936
 ] 

Konstantin Shvachko commented on HADOOP-11780:
--

The patch looks good. Also fixes HADOOP-13657.
+1 on behalf of [~zhz]

> Prevent IPC reader thread death
> ---
>
> Key: HADOOP-11780
> URL: https://issues.apache.org/jira/browse/HADOOP-11780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-11780.patch
>
>
> Reader threads can die due to a race condition with the responder thread.  If the 
> server's ipc handler cannot send a response in one write, it delegates 
> sending the rest of the response to the responder thread.
> The race occurs when the responder thread has an exception writing to the 
> socket.  The responder closes the socket.  This wakes up the reader polling 
> on the socket.  If a {{CancelledKeyException}} is thrown, which is a runtime 
> exception, the reader dies.  All connections serviced by that reader are now 
> in limbo until the client possibly times out.  New connections play roulette 
> as to whether they are assigned to a defunct reader.
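The fix direction implied by the description is to keep a reader alive across per-connection runtime exceptions. A minimal standalone sketch of that failure mode and defense (not the actual {{o.a.h.ipc.Server}} code; the event-queue simulation is an assumption for illustration):

```java
import java.nio.channels.CancelledKeyException;
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: a reader loop that survives a RuntimeException (such as
// CancelledKeyException raised when the responder closes the socket out
// from under the reader) instead of letting the whole thread die.
public class ResilientReader {
    static int process(Queue<String> events) {
        int handled = 0;
        while (!events.isEmpty()) {
            String event = events.poll();
            try {
                if ("bad-key".equals(event)) {
                    // Simulates the responder closing the channel while
                    // the reader is still polling its selection key.
                    throw new CancelledKeyException();
                }
                handled++;
            } catch (RuntimeException e) {
                // Drop only this connection's event; the reader stays
                // alive so its other connections are still serviced.
            }
        }
        return handled;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        q.add("ok1");
        q.add("bad-key");
        q.add("ok2");
        int n = process(q);
        if (n != 2) {
            throw new AssertionError("expected 2, got " + n);
        }
        System.out.println("handled=" + n);
    }
}
```

Without the catch, the first {{CancelledKeyException}} would unwind the loop and strand every connection assigned to that reader, exactly the "defunct reader" scenario described above.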






[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527916#comment-15527916
 ] 

Mingliang Liu commented on HADOOP-13628:


The patch looks good to me overall. [~ste...@apache.org] can you confirm that 
your concerns are all addressed? Thanks.

One nit: could {{TestConfServlet#setUp()}} be made {{static}} and annotated 
with {{@BeforeClass}}?

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: 404_error_browser.png, HADOOP-13628.01.patch, 
> HADOOP-13628.02.patch, HADOOP-13628.03.patch, HADOOP-13628.04.patch, 
> HADOOP-13628.05.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work on the client side when dealing with Hadoop configurations, and it 
> is also quite a lot of overhead to send the entire configuration in an HTTP 
> response over the network. I propose to support a {{name}} parameter in the 
> HTTP request, by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> to get output such as
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.






[jira] [Commented] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527914#comment-15527914
 ] 

Hadoop QA commented on HADOOP-13658:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830594/HADOOP-13658.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bcfb6c4562a1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10620/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10620/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch, 
> HADOOP-13658.003.patch
>
>
> In Hadoop Common, there are several places where the config 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527903#comment-15527903
 ] 

Kai Zheng commented on HADOOP-12756:


Thank you [~arpitagarwal], [~andrew.wang] and [~anu] for the feedback, thoughts 
and suggestions. It sounds like a great community and I love it :).

bq. In which case, do you mind reverting and firing up a VOTE thread on 
common-dev@? 
Sure, let me follow this: I will revert the commit and call for the merge vote.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but it is currently not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between the user's application and its data storage, 
> as has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527773#comment-15527773
 ] 

Anu Engineer edited comment on HADOOP-12756 at 9/28/16 12:24 AM:
-

[~drankye] I think what Arpit is saying is that he does *not* have an issue 
with the code. The proper process to bring in this code would be to call for a 
vote. Again, this has nothing to do with the Aliyun code or technical issues. 
A vote gives the community a chance to review, understand and comment upon the 
code base before it is committed. That, I think, would be the best way to 
build a community of contributors around this feature.

If you agree that we should follow the right process, I think we should 
*revert* this change, call for a merge vote, and merge based on the results 
of that voting thread.

The danger of the precedent we are setting in this branch is that someone else 
might decide to bring in another feature via this loophole, saying that this 
was done for the Aliyun code merge. That is what I think we want to avoid; in 
many senses a rule of law remains a rule only if it is followed 
consistently. 

I am really sympathetic to what was done and I appreciate the enthusiasm and 
the spirit of "let us get it done", but I think this list of changes is large 
enough for us to follow the right process. As far as I can see, a few days 
spent on voting will only strengthen the sense of community around this 
code base.

[~andrew.wang] Since this is a single commit, reverting and merging will 
actually be a better experience, because it will allow you to follow the 
policy that you suggested:

bq. "git merge --no-ff" is also the preferred way of integrating a feature 
branch to other branches, e.g. branch-2.
From 
https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E


was (Author: anu):
[~drankye] I think what Arpit is saying is that he does *not* have an issue 
with the code. The proper process to bring in this code would be to call for 
vote. Again, it is nothing to do with Aliyun code or technical issues. It gives 
the community a chance to review, understand and comment upon the code base 
before it is committed. That I think would be the best way to build a community 
of contributors around this feature.

If you agree that we should follow the right process, I think we should 
*revert* this change and call for a merge vote and merge based on the results 
of such a voting thread.

The danger of the precedent what we are doing in this branch would be that 
someone else might decide to bring in another feature via this loophole saying 
that this was done in Aliyun code merge. That is what I think we want to avoid, 
in many senses a rule of law remains a rule only if it is followed 
consistently. 

I am really sympathetic to what was done and I appreciate the enthusiasm and 
the spirit of let us get it done,  but I think this list of changes is large 
enough for us  for us to follow the right process.  As far as I can see, few 
days spend on voting time will only strengthen the sense of community around 
this code base.

[~andrew.wang] Since this is a single commit, reverting and merging will 
actuall be a better experience, because it will allow you follow the policy 
that was suggested by you 

 "git merge --no-ff" is also the preferred way of integrating a feature branch 
to other branches, e.g. branch-2."
From 
https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and data storage, as 
> has been done for S3 in 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527871#comment-15527871
 ] 

Anu Engineer commented on HADOOP-12756:
---

bq.  our fellow committers are good actors and behaving with the best interests 
of the project in mind

Absolutely, that is why we try to have each other's backs, just like you would 
point out an error in a code review -- that is trying to help each other. A 
mistake pointed out by a fellow community member is indeed an appreciation of 
what you have contributed. 

I think Arpit's original comment was pointing out a mistake -- and I think we 
all owe him a bit of gratitude. 

bq. Ultimately, we need to trust the other people we work with in the community 
since no one can personally review every change that goes in.

Again, I completely agree; that is why we have a community, and hopefully 
someone else is there to catch the ball when you miss. In fact, in this 
particular case it is an expression of trust when someone suggests that an 
error might have occurred, instead of issuing a -1 veto. The very fact that 
this is being discussed in the corresponding JIRA without a -1 is indeed an 
expression of respect and trust. I would think these threads have been really 
appreciative of the work and a very gentle reminder of why we do things the 
way we do. 


> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and data storage, as 
> has been done for S3 in Hadoop 






[jira] [Commented] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527843#comment-15527843
 ] 

Mingliang Liu commented on HADOOP-13658:


+1 v3 patch pending on Jenkins.

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch, 
> HADOOP-13658.003.patch
>
>
> In Hadoop Common, there are several places where config keys are referenced 
> by literal strings instead of the named constants in the configuration key 
> classes. The default values have the same issue. For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527836#comment-15527836
 ] 

Andrew Wang commented on HADOOP-12756:
--

Generally speaking, I'd prefer we default to assuming that our fellow 
committers are good actors and behaving with the best interests of the project 
in mind. Rules aren't meant to be blindly enforced, and given our positive 
working relationships with committers like Kai and Steve, we're allowed to let 
these little mistakes slide. Ultimately, we need to trust the other people we 
work with in the community since no one can personally review every change that 
goes in. 

Anu, thanks for bringing up the single commit though. It looks like the branch 
was squashed and committed as a single commit, so we lost all the history. 
Fixing this seems worthwhile, in which case we might as well go through the 
merge vote for completeness.

[~drankye] do you agree? In which case, do you mind reverting and firing up a 
[VOTE] thread on common-dev@? Thanks.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and data storage, as 
> has been done for S3 in Hadoop 






[jira] [Assigned] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri reassigned HADOOP-13663:


Assignee: Inigo Goiri

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13663.000.patch, HADOOP-13663.001.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and this returns empty values. This triggers the following 
> exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  
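The failure mode above can be avoided by checking the {{indexOf}} result before calling {{substring}}. The sketch below is hypothetical (the class and method names are not the actual SysInfoWindows code); it only illustrates the defensive parsing that turns empty or truncated winutils.exe output into "no data for this refresh" instead of a {{StringIndexOutOfBoundsException}}:

```java
// Hedged sketch: parse a "name,value" counter line defensively. The real
// SysInfoWindows#refreshIfNeeded parses a different format; the point here
// is guarding indexOf() before substring().
public class SysInfoParseSketch {

    /** Returns the value after the first comma, or null for empty/partial input. */
    public static String fieldAfterComma(String line) {
        if (line == null || line.isEmpty()) {
            return null; // winutils produced no output this refresh
        }
        int idx = line.indexOf(',');
        if (idx < 0) {
            return null; // delimiter missing: truncated output, skip it
        }
        return line.substring(idx + 1);
    }

    public static void main(String[] args) {
        System.out.println(fieldAfterComma("PhysicalMemorySize,17054199808"));
        System.out.println(fieldAfterComma("")); // null, not an exception
    }
}
```

With the unguarded {{substring(indexOf(',') + 1)}} idiom, the empty-string case raises the exact `String index out of range: -1` seen in the stack trace.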






[jira] [Commented] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527818#comment-15527818
 ] 

Hadoop QA commented on HADOOP-13658:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830591/HADOOP-13658.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 593046dc33c0 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10619/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10619/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10619/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>

[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13658:

Attachment: HADOOP-13658.003.patch

Thanks [~liuml07] for the review and the comments! Uploaded v003 patch to use 
static import instead.

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch, 
> HADOOP-13658.003.patch
>
>
> In Hadoop Common, there are several places where config keys are referenced 
> by literal strings instead of the named constants in the configuration key 
> classes. The default values have the same issue. For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527773#comment-15527773
 ] 

Anu Engineer commented on HADOOP-12756:
---

[~drankye] I think what Arpit is saying is that he does *not* have an issue 
with the code. The proper process to bring in this code would be to call for a 
vote. Again, this has nothing to do with the Aliyun code or technical issues. 
A vote gives the community a chance to review, understand and comment upon the 
code base before it is committed. That, I think, would be the best way to 
build a community of contributors around this feature.

If you agree that we should follow the right process, I think we should 
*revert* this change, call for a merge vote, and merge based on the results 
of that voting thread.

The danger of the precedent we are setting in this branch is that someone else 
might decide to bring in another feature via this loophole, saying that this 
was done for the Aliyun code merge. That is what I think we want to avoid; in 
many senses a rule of law remains a rule only if it is followed 
consistently. 

I am really sympathetic to what was done and I appreciate the enthusiasm and 
the spirit of "let us get it done", but I think this list of changes is large 
enough for us to follow the right process. As far as I can see, a few days 
spent on voting will only strengthen the sense of community around 
this code base.

[~andrew.wang] Since this is a single commit, reverting and merging will 
actually be a better experience, because it will allow you to follow the 
policy that you suggested:

bq. "git merge --no-ff" is also the preferred way of integrating a feature 
branch to other branches, e.g. branch-2.
From 
https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and data storage, as 
> has been done for S3 in Hadoop 






[jira] [Commented] (HADOOP-13537) Support external calls in the RPC call queue

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527749#comment-15527749
 ] 

Hadoop QA commented on HADOOP-13537:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 223 unchanged - 0 fixed = 224 total (was 223) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13537 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830580/HADOOP-13537-2.patch |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 0ffd510b215b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10618/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10618/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10618/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10618/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527744#comment-15527744
 ] 

Andrew Wang commented on HADOOP-12756:
--

Also somewhat unrelated, could one of the contributors update the fix version 
of this umbrella JIRA to reflect the merge, and also add some release notes? 
Thanks.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and data storage, as 
> has been done for S3 in Hadoop 






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527739#comment-15527739
 ] 

Andrew Wang commented on HADOOP-12756:
--

Agree with Arpit that this shouldn't have been merged without a merge vote.

Could we treat this as a learning experience? Looking at the JIRA, at least two 
committers (Kai and Steve) did look at it, and what happened seems like an 
honest mistake not to be repeated.

Sending a ping to common-dev would be good as a heads-up, but I'm hoping we can 
retroactively +1 to avoid the git gymnastics to revert and recommit the code.
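The "git gymnastics" being weighed here are the revert-and-re-merge workflow from the other comments. A hedged sketch, run against a throwaway repository (all branch and file names are hypothetical), showing how reverting a squashed commit and re-integrating with {{git merge --no-ff}} recovers a real merge commit with history:

```shell
# Hedged sketch of the revert-then-merge-with-history workflow; the repo,
# branch, and file names below are invented for illustration.
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev
trunk=$(git symbolic-ref --short HEAD)          # 'master' or 'main'

echo base > file.txt && git add . && git commit -qm "base"
git checkout -qb HADOOP-12756                   # feature branch
echo oss > oss.txt && git add . && git commit -qm "OSS support"

git checkout -q "$trunk"
# The mistake: squash-merge loses the branch history.
git merge --squash -q HADOOP-12756 && git commit -qm "squashed merge (mistake)"
git revert -n HEAD && git commit -qm "Revert squashed merge"

# The fix: a true merge commit that preserves the feature branch history.
git merge --no-ff --no-edit -q HADOOP-12756
git rev-parse -q --verify HEAD^2 > /dev/null && echo "merge commit has 2 parents"
```

Because the squash was reverted first, the subsequent {{--no-ff}} merge re-applies the branch content cleanly and keeps the individual feature commits reachable.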

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527660#comment-15527660
 ] 

Mingliang Liu commented on HADOOP-13658:


Could you use static imports where possible? That way the code would be shorter, 
without verbose line breaks like the following:
{code}
conf.getInt(CommonConfigurationKeysPublic
    .IO_FILE_BUFFER_SIZE_KEY,
    CommonConfigurationKeysPublic
        .IO_FILE_BUFFER_SIZE_DEFAULT));
{code}

Otherwise +1. Thanks.
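For illustration, here is a self-contained sketch of why named constants beat literal key strings. {{Conf}} and {{Keys}} are minimal stand-ins for Hadoop's {{Configuration}} and {{CommonConfigurationKeys}}, not the real classes:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal stand-in for Hadoop's Configuration, for illustration only. */
class Conf {
    private final Map<String, Integer> map = new HashMap<>();
    void setInt(String key, int value) { map.put(key, value); }
    int getInt(String key, int defaultValue) {
        Integer v = map.get(key);
        return v == null ? defaultValue : v;
    }
}

/** Stand-in constants; the real ones live in o.a.h.fs.CommonConfigurationKeys. */
final class Keys {
    static final String IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY =
        "io.compression.codec.lzo.buffersize";
    static final int IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT = 64 * 1024;
}

public class ConfigKeyDemo {
    public static void main(String[] args) {
        Conf conf = new Conf();
        // A literal key with a typo compiles fine and silently falls back to
        // the default at read time:
        conf.setInt("io.compression.codec.lzo.bufersize", 128 * 1024);
        // A named constant cannot be misspelled without a compile error:
        conf.setInt(Keys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
                    Keys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
        System.out.println(conf.getInt(
            Keys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY, -1)); // 65536
    }
}
```

The misspelled literal is stored under the wrong key, so readers of the intended key never see it; the constant form turns that silent failure into a compile-time error.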

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch
>
>
> In Hadoop Common, there are several places where the config keys are used by 
> the literal strings instead of their names as in configuration key classes. 
> The default values have the same issue. For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527654#comment-15527654
 ] 

Kai Zheng commented on HADOOP-12756:


bq. it does not serve the same purpose because it is not as visible to the 
community.
Yeah, I agree, a separate discussion thread for extra attention would have been 
better, even though this effort was watched by many people.

bq. I don't see the requisite 3 binding +1s between your two comments.
Got it. We need explicit votes, rather than just an absence of objections, 
before taking action.

bq. This sets a bad precedent.
That was never my intent. Could we recover from this? Would it help if I called 
for a discussion thread now? If it turns out that something must be fixed 
before the work goes in, I can revert this and then do the fix.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527659#comment-15527659
 ] 

Robert Kanter commented on HADOOP-12611:


Iterating through all of the permutations sounds like the best solution.  It's 
been too long for me to remember why I didn't do that before :)
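As a hypothetical sketch of that approach (not the actual patch), the flaky fixed-order assertion can be replaced by one that accepts any permutation of the expected secrets:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PermutationCheck {
    // Generate all orderings of the input list. Recursion is clearest for the
    // small n a test deals with (n! orderings, so keep n small).
    static <T> List<List<T>> permutations(List<T> items) {
        List<List<T>> out = new ArrayList<>();
        if (items.isEmpty()) {
            out.add(new ArrayList<>());
            return out;
        }
        for (int i = 0; i < items.size(); i++) {
            List<T> rest = new ArrayList<>(items);
            T head = rest.remove(i);
            for (List<T> tail : permutations(rest)) {
                tail.add(0, head);
                out.add(tail);
            }
        }
        return out;
    }

    // A flaky assertion on one fixed ordering becomes order-independent by
    // accepting any permutation of the expected values.
    static boolean matchesAnyOrder(List<String> expected, List<String> actual) {
        for (List<String> p : permutations(expected)) {
            if (p.equals(actual)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> expected = Arrays.asList("secretA", "secretB", "secretC");
        List<String> observed = Arrays.asList("secretB", "secretA", "secretC");
        System.out.println(matchesAnyOrder(expected, observed)); // true
    }
}
```

The "secret" names are illustrative; in the real test the values would be the byte arrays the providers produced, whose relative order depends on rollover timing.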

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)






[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13658:

Description: 
In Hadoop Common, there are several places where the config keys are used by 
the literal strings instead of their names as in configuration key classes. The 
default values have the same issue. For example

{code:title=in o.a.h.i.f.t.Compression.java}
conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
{code}

should be

{code}
conf.setInt(
CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
{code}

instead

  was:
In Hadoop Common, there are several places where the config keys are used by 
the literal strings instead of their names as in configuration key classes. For 
example

{code:title=in o.a.h.i.f.t.Compression.java}
conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
{code}

should be

{code}
conf.setInt(CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY, 64 
* 1024);
{code}

instead


> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch
>
>
> In Hadoop Common, there are several places where the config keys are used by 
> the literal strings instead of their names as in configuration key classes. 
> The default values have the same issue. For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY,
> CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT);
> {code}
> instead






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527649#comment-15527649
 ] 

Arpit Agarwal commented on HADOOP-12756:


To clarify, I don't object to the content of the change (I haven't looked into 
it). This change is likely safe because it doesn't affect existing code. But 
letting committers override the branch merge procedure selectively is opening a 
can of worms.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13658:

Attachment: HADOOP-13658.002.patch

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch
>
>
> In Hadoop Common, there are several places where the config keys are used by 
> the literal strings instead of their names as in configuration key classes. 
> For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY, 
> 64 * 1024);
> {code}
> instead






[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13658:

Attachment: (was: HADOOP-13658.002.patch)

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch
>
>
> In Hadoop Common, there are several places where the config keys are used by 
> the literal strings instead of their names as in configuration key classes. 
> For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY, 
> 64 * 1024);
> {code}
> instead






[jira] [Updated] (HADOOP-13658) Replace config key literal strings with config key names I: hadoop common

2016-09-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13658:

Attachment: HADOOP-13658.002.patch

> Replace config key literal strings with config key names I: hadoop common
> -
>
> Key: HADOOP-13658
> URL: https://issues.apache.org/jira/browse/HADOOP-13658
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13658.001.patch, HADOOP-13658.002.patch
>
>
> In Hadoop Common, there are several places where the config keys are used by 
> the literal strings instead of their names as in configuration key classes. 
> For example
> {code:title=in o.a.h.i.f.t.Compression.java}
> conf.setInt("io.compression.codec.lzo.buffersize", 64 * 1024);
> {code}
> should be
> {code}
> conf.setInt(CommonConfigurationKeys.IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY, 
> 64 * 1024);
> {code}
> instead






[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-27 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527640#comment-15527640
 ] 

Aaron Fabbri commented on HADOOP-13448:
---

Hey [~cnauroth], thanks for the feedback.

{quote}
Regarding option (c), what is your thoughts about that it moves the complexity 
to the caller
{quote}

I see the complexity as unavoidable if we want (1) on-demand storage of 
metadata (i.e. don't have to pre-load the entire s3a bucket's metadata before 
turning on MetadataStore, it just incrementally picks up state), and (2) the 
ability to do atomic move.  

I feel like #1 is very valuable here, and ultimately more robust.

{quote}
what about we just do the move(srcKey, dstKey) together with copyFile() in the 
loop
{quote}
I'd be fine with this if we don't mind relaxing #2.  That is, there is no 
atomic {{MetadataStore#move()}}.  We are still able to explicitly track path 
deletions.

We can always add back this "batch move" interface I propose as (c), if the 
DynamoDB implementation wants to implement atomic move, or wants to batch this 
stuff to make it more efficient.

That said, it is probably not too hard for s3a to accumulate lists of paths and 
pass them in as a batch towards the end.

Some options:

1. Keep the (c) batch interface (I have a patch for this I can post on 
HADOOP-13631)
2. Implement a non-batch, non-recursive move(srcPath, destPath) as you mention.
3. Implement both single move() and batch move(), with default batch 
implementation just looping over input collections and calling the single 
variant.

I'm good with any of these.  We should feel comfortable punting complexity to 
future patches until we have more context from the DynamoDB implementation and 
the s3a integration.
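Option 3 could be sketched as below; the interface name and signatures are illustrative, not the actual S3Guard API:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * Hypothetical sketch of option 3: a single-path move() plus a batch variant
 * whose default implementation just loops over the pairs. A DynamoDB-backed
 * store could override the batch method to make it atomic or more efficient.
 */
interface MetadataStore {
    void move(String srcPath, String dstPath);

    default void move(Collection<String> srcPaths, Collection<String> dstPaths) {
        // Default: non-atomic, pairwise single moves.
        Iterator<String> src = srcPaths.iterator();
        Iterator<String> dst = dstPaths.iterator();
        while (src.hasNext() && dst.hasNext()) {
            move(src.next(), dst.next());
        }
    }
}

/** Toy in-memory implementation to exercise the default batch move. */
class InMemoryMetadataStore implements MetadataStore {
    final Map<String, String> entries = new HashMap<>();

    @Override
    public void move(String srcPath, String dstPath) {
        String meta = entries.remove(srcPath);
        if (meta != null) {
            entries.put(dstPath, meta);
        }
    }
}

public class MoveDemo {
    public static void main(String[] args) {
        InMemoryMetadataStore store = new InMemoryMetadataStore();
        store.entries.put("/a/1", "meta1");
        store.entries.put("/a/2", "meta2");
        store.move(Arrays.asList("/a/1", "/a/2"), Arrays.asList("/b/1", "/b/2"));
        System.out.println(store.entries.containsKey("/b/1")
            && !store.entries.containsKey("/a/1")); // true
    }
}
```

The default method keeps the single-move contract as the only thing implementations must provide, while leaving the batch entry point available for a later atomic override.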



> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13448-HADOOP-13345.001.patch, 
> HADOOP-13448-HADOOP-13345.002.patch, HADOOP-13448-HADOOP-13345.003.patch, 
> HADOOP-13448-HADOOP-13345.004.patch, HADOOP-13448-HADOOP-13345.005.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.






[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527618#comment-15527618
 ] 

Hadoop QA commented on HADOOP-12611:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch 
generated 26 new + 2 unchanged - 2 fixed = 28 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12611 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830582/HADOOP-12611.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 920793daf2e8 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10617/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10617/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10617/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527584#comment-15527584
 ] 

Arpit Agarwal commented on HADOOP-12756:


bq. I wish it could serve the same purpose and would work for you as well.
[~drankye], no it does not serve the same purpose because it is not as visible 
to the community. Even discounting the lack of an email thread, I don't see the 
requisite 3 binding +1s between your two comments. This sets a bad precedent.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12579) Deprecate WriteableRPCEngine

2016-09-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
Hadoop Flags:   (was: Incompatible change)

> Deprecate WriteableRPCEngine
> 
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.






[jira] [Updated] (HADOOP-12579) Deprecate WriteableRPCEngine

2016-09-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
Summary: Deprecate WriteableRPCEngine  (was: Deprecate and remove 
WriteableRPCEngine)

> Deprecate WriteableRPCEngine
> 
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.






[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527574#comment-15527574
 ] 

Kai Zheng commented on HADOOP-12579:


In MAPREDUCE-6706 [~djp] had a 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-6706?focusedCommentId=15500791=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15500791]
 that the old RPC engine WriteableRPCEngine might still be used by other 
projects like Tez. Considering this, I'd re-target this as only {{deprecation}} 
rather than {{removal}} of the engine. Please correct me if you think 
otherwise. Thanks.
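The deprecate-only approach can be sketched as follows; {{WritableRpcEngineSketch}} is a hypothetical stand-in, not the real o.a.h.ipc engine class:

```java
/**
 * Hypothetical stand-in showing the deprecation approach: keep the class
 * available for downstream projects (e.g. Tez) but mark it deprecated and
 * point callers at the replacement.
 *
 * @deprecated Use the protobuf-based RPC engine instead; see HADOOP-12579.
 */
@Deprecated
class WritableRpcEngineSketch {
}

public class DeprecationDemo {
    public static void main(String[] args) {
        // @Deprecated has runtime retention, so tools, compilers, and
        // reflection all see the marker while existing callers keep working.
        System.out.println(
            WritableRpcEngineSketch.class.isAnnotationPresent(Deprecated.class)); // true
    }
}
```

Downstream builds then get a deprecation warning rather than a compile failure, which matches the branch-2 compatibility goal; actual removal can follow in trunk.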

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEninge}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has be shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.






[jira] [Comment Edited] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527530#comment-15527530
 ] 

Kai Zheng edited comment on HADOOP-12756 at 9/27/16 9:55 PM:
-

Hi [~arpitagarwal],

There wasn't an explicit vote thread for this on the mailing list. I tracked 
the important discussions in this master issue; in the [above 
comment|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15511801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15511801]
 I summarized the branch work and called for the merge. I hope it serves the 
same purpose and works for you as well. The merge was recorded 
[here|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15520800=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15520800].
How does that look to you?

Thank you for the discussion.


was (Author: drankye):
Hi [~arpitagarwal],

Yes there wasn't an explicit vote thread called for this in the mailing list. I 
tracked the important discussions in this master issue, in [above 
comment|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15511801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15511801]
 made the summary about this branch work and called for the merge. I wish it 
could serve the same purpose and would work for you as well. The merge was 
recorded 
[here|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15520800=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15520800]
 and how would you like it?

Thank you for the discussion.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a Hadoop/Spark application, because 
> Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527531#comment-15527531
 ] 

Lei (Eddy) Xu commented on HADOOP-13448:


Hi, [~fabbri] Thanks for the proposals.

Regarding option (c), what are your thoughts on it moving the complexity to 
the caller (i.e., the s3a filesystem)? For instance, s3a would need to 
determine where to get a full list of S3 files under "{{/src}}". The other 
concern, as you mentioned, is that the interface is not very intuitive to API 
consumers.

Since moving a directory recursively is ultimately implemented by copying the 
actual S3 files (objects) one by one, what if we just do the {{move(srcKey, 
dstKey)}} together with {{copyFile()}} in the loop in 
{{S3AFileSystem#innerRename()}}, so that {{move()}} does not need to validate 
{{srcPath}} and {{dstPath}}?

{code}
  while (true) {
    for (S3ObjectSummary summary : objects.getObjectSummaries()) {
      keysToDelete.add(
          new DeleteObjectsRequest.KeyVersion(summary.getKey()));
      String newDstKey =
          dstKey + summary.getKey().substring(srcKey.length());
      copyFile(summary.getKey(), newDstKey, summary.getSize());

      if (keysToDelete.size() == MAX_ENTRIES_TO_DELETE) {
        removeKeys(keysToDelete, true);
      }
    }
  }
{code}
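The destination-key computation in that loop is just a prefix swap, and it is also where a per-file {{move()}} would pick up its (srcKey, dstKey) pair. A standalone sketch of that mapping (class and method names here are illustrative, not Hadoop code):

```java
public class RenameKeyMapping {
    // Prefix swap used during rename: an object key under srcKey maps to the
    // same relative path under dstKey, mirroring the substring call above.
    static String newDstKey(String srcKey, String dstKey, String objKey) {
        return dstKey + objKey.substring(srcKey.length());
    }

    public static void main(String[] args) {
        // e.g. data/src/part-00000 copied to data/dest/part-00000
        System.out.println(
            newDstKey("data/src/", "data/dest/", "data/src/part-00000"));
    }
}
```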


> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13448-HADOOP-13345.001.patch, 
> HADOOP-13448-HADOOP-13345.002.patch, HADOOP-13448-HADOOP-13345.003.patch, 
> HADOOP-13448-HADOOP-13345.004.patch, HADOOP-13448-HADOOP-13345.005.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527530#comment-15527530
 ] 

Kai Zheng commented on HADOOP-12756:


Hi [~arpitagarwal],

Yes, there wasn't an explicit vote thread for this on the mailing list. I 
tracked the important discussions in this master issue; in the [above 
comment|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15511801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15511801]
 I summarized the branch work and called for the merge. I hope it serves the 
same purpose and works for you as well. The merge was recorded 
[here|https://issues.apache.org/jira/browse/HADOOP-12756?focusedCommentId=15520800=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15520800].
How does that look to you?

Thank you for the discussion.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a Hadoop/Spark application, because 
> Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-12611:
-
Attachment: HADOOP-12611.003.patch

Uploading a new patch built off of [~jojochuang]'s 002 patch. I made 3 main 
changes.

1. I explicitly call realRollSecret() to get rid of the race condition between 
rollSecret() being called and checking the secrets of each secretProvider.
2. I changed the verify() calls to use atLeast() instead of times(), because 
rollSecret() can be called many times if the main code is slow and the 
scheduler thread is not.
3. I randomized the order in which realRollSecret() is called for each 
secretProvider.

Regarding change #3, an offline conversation with [~jlowe] makes me wonder why 
we need this in the first place (though I was the one who suggested randomly 
selecting the order). Why are we relying on the test to give us randomness? If 
there is a finite number of states, as there is here, why don't we iterate 
over all of them instead of hoping to hit every state across multiple runs? If 
we care that every permutation of the realRollSecret() order works, then we 
should test every permutation. Randomly selecting the order means it would 
take 6+ runs to cover every permutation, so if a source change broke one of 
those permutations, we might not see the failure until much later (and it 
would then be hard to debug, since we wouldn't know the order). [~rkanter], I 
think the better approach is to explicitly iterate through all of the 
permutations (or at least 2 permutations, one with each secretProvider 
"winning" the race).
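Enumerating the orderings explicitly is cheap for a small, fixed number of providers. A hedged sketch of the idea (provider names are placeholders, not the test's actual objects):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RollOrderPermutations {
    // All orderings of the given providers; for two providers this is just
    // [A, B] and [B, A], so a test can drive realRollSecret() in each order
    // deterministically instead of picking one order at random per run.
    static List<List<String>> orderings(List<String> providers) {
        List<List<String>> result = new ArrayList<>();
        if (providers.isEmpty()) {
            result.add(new ArrayList<>());
            return result;
        }
        for (String p : providers) {
            // fix p as the first element, then permute the rest recursively
            List<String> rest = new ArrayList<>(providers);
            rest.remove(p);
            for (List<String> tail : orderings(rest)) {
                List<String> order = new ArrayList<>();
                order.add(p);
                order.addAll(tail);
                result.add(order);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // each ordering would drive one realRollSecret() sequence in the test
        System.out.println(orderings(Arrays.asList("A", "B")));
    }
}
```

A test looping over these orderings fails deterministically on the first run that a broken permutation is introduced, rather than 6+ runs later.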

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch, 
> HADOOP-12611.003.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data fromZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 

[jira] [Updated] (HADOOP-13537) Support external calls in the RPC call queue

2016-09-27 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-13537:

Attachment: HADOOP-13537-2.patch

Attaching a new patch.

> Support external calls in the RPC call queue
> 
>
> Key: HADOOP-13537
> URL: https://issues.apache.org/jira/browse/HADOOP-13537
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-13537-1.patch, HADOOP-13537-2.patch, 
> HADOOP-13537.patch
>
>
> Leveraging HADOOP-13465 will allow non-rpc calls to be added to the call 
> queue.  This is intended to support routing webhdfs calls through the call 
> queue to provide a unified and protocol-independent QoS.






[jira] [Updated] (HADOOP-13537) Support external calls in the RPC call queue

2016-09-27 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-13537:

Status: Open  (was: Patch Available)

Canceling patch to address checkstyle and findbugs warning.

> Support external calls in the RPC call queue
> 
>
> Key: HADOOP-13537
> URL: https://issues.apache.org/jira/browse/HADOOP-13537
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-13537-1.patch, HADOOP-13537-2.patch, 
> HADOOP-13537.patch
>
>
> Leveraging HADOOP-13465 will allow non-rpc calls to be added to the call 
> queue.  This is intended to support routing webhdfs calls through the call 
> queue to provide a unified and protocol-independent QoS.






[jira] [Updated] (HADOOP-13537) Support external calls in the RPC call queue

2016-09-27 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-13537:

Status: Patch Available  (was: Open)

> Support external calls in the RPC call queue
> 
>
> Key: HADOOP-13537
> URL: https://issues.apache.org/jira/browse/HADOOP-13537
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-13537-1.patch, HADOOP-13537-2.patch, 
> HADOOP-13537.patch
>
>
> Leveraging HADOOP-13465 will allow non-rpc calls to be added to the call 
> queue.  This is intended to support routing webhdfs calls through the call 
> queue to provide a unified and protocol-independent QoS.






[jira] [Commented] (HADOOP-13661) Upgrade HTrace version

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527494#comment-15527494
 ] 

Hadoop QA commented on HADOOP-13661:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13661 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830570/HADOOP-13661.002.patch
 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  xml  |
| uname | Linux c73c373ffc9d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1831be8 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10616/testReport/ |
| modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10616/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade HTrace version
> --
>
> Key: HADOOP-13661
> URL: https://issues.apache.org/jira/browse/HADOOP-13661
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13661.001.patch, HADOOP-13661.002.patch
>
>
> We're currently pulling in version 4.0.1-incubating - I think we should 
> upgrade to the latest 4.1.0-incubating.

[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-09-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527444#comment-15527444
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Great. Thanks a lot!

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527442#comment-15527442
 ] 

Mingliang Liu commented on HADOOP-13449:


Will post a WIP patch in one week.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-09-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527383#comment-15527383
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Hi, [~liuml07] 

Would you mind giving some insight into the progress, given that HADOOP-13448 
has been committed?

Much appreciated. 


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Updated] (HADOOP-13661) Upgrade HTrace version

2016-09-27 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13661:
---
Attachment: HADOOP-13661.002.patch

Good catch - we should definitely update other references to that version. 
Attaching an updated patch.

> Upgrade HTrace version
> --
>
> Key: HADOOP-13661
> URL: https://issues.apache.org/jira/browse/HADOOP-13661
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13661.001.patch, HADOOP-13661.002.patch
>
>
> We're currently pulling in version 4.0.1-incubating - I think we should 
> upgrade to the latest 4.1.0-incubating.






[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527351#comment-15527351
 ] 

Hadoop QA commented on HADOOP-13590:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830558/HADOOP-13590.07.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 832738b28dde 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1831be8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10615/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10615/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 

[jira] [Commented] (HADOOP-13666) Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology

2016-09-27 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527313#comment-15527313
 ] 

Inigo Goiri commented on HADOOP-13666:
--

The failed unit test seems unrelated. We should retrigger the build to verify.

> Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes that the 
> excluded entries are leaves. We should count the proper number of nodes.
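One reading of the fix is that each excluded entry should subtract its full leaf count rather than 1 per entry. A toy sketch under that assumption (these are not the actual NetworkTopology classes):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class CountAvailable {
    // Toy topology node: a rack holds hosts; a host (no children) is a leaf.
    static class Node {
        final List<Node> children = new ArrayList<>();

        int leafCount() {
            if (children.isEmpty()) {
                return 1;  // a host counts as one available node
            }
            int n = 0;
            for (Node c : children) {
                n += c.leafCount();
            }
            return n;
        }
    }

    // Subtract each excluded entry's full leaf count, so excluding a whole
    // rack removes all of its hosts, not just one node.
    static int availableNodes(Node root, Collection<Node> excluded) {
        int n = root.leafCount();
        for (Node e : excluded) {
            n -= e.leafCount();
        }
        return n;
    }

    public static void main(String[] args) {
        Node root = new Node();
        Node rack1 = new Node(), rack2 = new Node();
        root.children.add(rack1);
        root.children.add(rack2);
        for (int i = 0; i < 2; i++) {
            rack1.children.add(new Node());
            rack2.children.add(new Node());
        }
        // excluding rack1 removes both of its hosts, leaving rack2's two
        System.out.println(availableNodes(root, java.util.Arrays.asList(rack1)));
    }
}
```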






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527290#comment-15527290
 ] 

Arpit Agarwal commented on HADOOP-12756:


Hi [~drankye], was this branch merged without a vote thread?

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a Hadoop/Spark application, because 
> Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-13666) Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527289#comment-15527289
 ] 

Hadoop QA commented on HADOOP-13666:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 62m 
43s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13666 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830535/HADOOP-13666.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2387a2c8c1c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ae4729 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10613/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10613/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10613/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
Attachment: HADOOP-13590.07.patch

Fixing checkstyle.
[~ste...@apache.org], please feel free to share your thoughts. Thank you.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.07.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something goes temporarily wrong and results in an IOE, no further renewal 
> is attempted even after the error recovers, and the client will eventually 
> fail to authenticate. We should retry on a best-effort basis until the TGT 
> expires, in the hope that the error clears before then.
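The proposed behavior can be sketched as a capped-backoff retry schedule that never sleeps past the ticket's end time. Everything here is an illustrative assumption — {{RenewalRetrySketch}}, {{nextRetryMillis}}, and the backoff constants are not the actual {{UserGroupInformation}} internals.

```java
// Hypothetical sketch of "retry until TGT expiry" scheduling: on each
// renewal failure, back off exponentially (capped), but never schedule a
// retry beyond the TGT end time. Not the real UGI renewal-thread code.
public class RenewalRetrySketch {

  /** Next retry time in millis, capped so we never sleep past expiry. */
  static long nextRetryMillis(long nowMillis, long tgtEndMillis, int failures) {
    // Exponential backoff starting at 60s, capped at 10 minutes.
    long backoff = Math.min(60_000L * (1L << Math.min(failures, 10)), 600_000L);
    return Math.min(nowMillis + backoff, tgtEndMillis);
  }
}
```

In this sketch the thread keeps retrying as long as `nextRetryMillis` is before the TGT end time, instead of terminating on the first exception.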






[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527114#comment-15527114
 ] 

Robert Kanter commented on HADOOP-12611:


That sounds right to me.

{quote}You said above to check secrets based on size, but the secrets list is 
only ever 2 elements. So I could check for it to change, but I don't know how I 
would check for each iteration based on the size of the list.{quote}
[~jojochuang]'s 001 patch adds a subclass that remembers all secrets in an 
ArrayList.  That should let you make the check based on the list's size.

Calling {{rollSecret()}} in a random order is an interesting idea.  I think 
that would cover testing the randomness in the ordering.
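The "remember every secret" helper discussed above might look like the following sketch. Both classes are hypothetical stand-ins — {{SecretProvider}} for {{ZKSignerSecretProvider}} and {{RecordingSecretProvider}} for the test subclass in the 001 patch — shown only to illustrate checking progress by list size rather than by the two-element current/previous pair.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for ZKSignerSecretProvider: holds a current secret
// and rolls it on demand.
class SecretProvider {
  protected byte[] secret = new byte[]{0};

  protected void rollSecret() {
    secret = new byte[]{(byte) (secret[0] + 1)};
  }
}

// Test subclass that records every rolled secret, so a test can assert on
// how many rolls happened (the list size) instead of on the 2-element
// current/previous view the provider normally exposes.
class RecordingSecretProvider extends SecretProvider {
  final List<byte[]> allSecrets = new ArrayList<>();

  @Override
  protected void rollSecret() {
    super.rollSecret();
    allSecrets.add(secret); // remember each secret as it is rolled
  }

  int rollCount() {
    return allSecrets.size();
  }
}
```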

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data fromZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.rollSecret()
>   at 
> org.apache.hadoop.security.authentication.util.RolloverSignerSecretProvider$1.run(RolloverSignerSecretProvider.java:97)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)






[jira] [Commented] (HADOOP-13544) JDiff reports unncessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527088#comment-15527088
 ] 

Hudson commented on HADOOP-13544:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10499 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10499/])
HADOOP-13544. JDiff reports unncessarily show unannotated APIs and cause 
(wangda: rev 875062b5bc789158290bf93dadc71b5328ca4fee)
* (edit) hadoop-project-dist/pom.xml
* (add) 
hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsJDiffDoclet.java
* (edit) 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_2.7.2.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Server_Common_2.7.2.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_2.7.2.xml
* (edit) hadoop-yarn-project/hadoop-yarn/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.7.2.xml
* (edit) 
hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.7.2.xml
* (edit) 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_2.7.2.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_2.7.2.xml
* (edit) 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_2.7.2.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.7.2.xml
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml


> JDiff reports unncessarily show unannotated APIs and cause confusion while 
> our javadocs only show annotated and public APIs
> ---
>
> Key: HADOOP-13544
> URL: https://issues.apache.org/jira/browse/HADOOP-13544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-13544-20160825.txt, HADOOP-13544-20160921.txt
>
>
> Our javadocs only show annotated and @Public APIs (original JIRAs 
> HADOOP-7782, HADOOP-6658).
> But the jdiff shows all APIs that are not annotated @Private. This causes 
> confusion on how we read the reports and what APIs we really broke.






[jira] [Updated] (HADOOP-13544) JDiff reports unncessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-09-27 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13544:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s: 3.0.0-alpha1, 2.8.0  (was: 2.8.0, 3.0.0-alpha1)
  Status: Resolved  (was: Patch Available)

Committed to branch-2/2.8/trunk, thanks [~vinodkv]

> JDiff reports unncessarily show unannotated APIs and cause confusion while 
> our javadocs only show annotated and public APIs
> ---
>
> Key: HADOOP-13544
> URL: https://issues.apache.org/jira/browse/HADOOP-13544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-13544-20160825.txt, HADOOP-13544-20160921.txt
>
>
> Our javadocs only show annotated and @Public APIs (original JIRAs 
> HADOOP-7782, HADOOP-6658).
> But the jdiff shows all APIs that are not annotated @Private. This causes 
> confusion on how we read the reports and what APIs we really broke.






[jira] [Commented] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527060#comment-15527060
 ] 

Hadoop QA commented on HADOOP-13599:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13599 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830543/HADOOP-13599-branch-2-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e372b4ac32ec 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 80628ee |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| findbugs | v3.0.0 |
| JDK v1.7.0_111  Test Results | 

[jira] [Updated] (HADOOP-13666) Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13666:
-
Target Version/s: 2.8.0

> Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes the 
> exclusions are leaves. We should count the proper number.






[jira] [Updated] (HADOOP-13666) Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13666:
-
Assignee: Inigo Goiri

> Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes the 
> exclusions are leaves. We should count the proper number.






[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Status: Patch Available  (was: Open)

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch, HADOOP-13599-branch-2-003.patch
>
>
> We've had a report of Hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139.
> S3a close() needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done.
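The re-entrancy guard described in the summary can be sketched like this. {{ClosableResource}} and its teardown counter are illustrative stand-ins, not the actual {{S3AFileSystem}} code.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a non-synchronized close(): since close() takes no monitor
// lock, a shutdown hook cannot deadlock against other FS operations, and
// the compareAndSet ensures the teardown body runs at most once even if
// close() is called concurrently from several threads.
class ClosableResource {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private int teardownCount = 0;

  public void close() throws IOException {
    if (!closed.compareAndSet(false, true)) {
      return; // already closed (or closing) in another thread
    }
    teardownCount++; // real code would release buffers, thread pools, etc.
  }

  int getTeardownCount() {
    return teardownCount;
  }
}
```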






[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Attachment: HADOOP-13599-branch-2-003.patch

Patch 003: addresses Chris's comments, especially the bit where I broke everything.

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch, HADOOP-13599-branch-2-003.patch
>
>
> We've had a report of Hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139.
> S3a close() needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done.






[jira] [Updated] (HADOOP-13599) s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown

2016-09-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13599:

Status: Open  (was: Patch Available)

> s3a close() to be non-synchronized, so avoid risk of deadlock on shutdown
> -
>
> Key: HADOOP-13599
> URL: https://issues.apache.org/jira/browse/HADOOP-13599
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13599-branch-2-001.patch, 
> HADOOP-13599-branch-2-002.patch, HADOOP-13599-branch-2-003.patch
>
>
> We've had a report of Hive deadlocking on teardown, as a synchronous FS close 
> was blocking shutdown threads, similar to HADOOP-3139.
> S3a close() needs to be made non-synchronized. All we need is some code to 
> prevent re-entrancy at the start; easily done.






[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527015#comment-15527015
 ] 

John Zhuge commented on HADOOP-7352:


Thanks [~daryn]. It depends on how many callers expect a null return from 
{{FileSystem#listStatus}}. So far we have found very few, so it is probably not 
a major break. More eyeballs would be much appreciated.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.
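The proposed contract change can be sketched as follows. {{listChildren}} is a hypothetical helper, not the actual {{RawLocalFileSystem}} code; it shows how a null from {{File#list}} would be surfaced as an exception instead of propagated to callers.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Illustrative sketch: File.list() returns null both when the path is
// missing and when the directory cannot be read; translating both cases
// into exceptions keeps NPEs out of callers, per the proposed contract.
class ListStatusSketch {
  static String[] listChildren(File dir) throws IOException {
    if (!dir.exists()) {
      throw new FileNotFoundException("Path does not exist: " + dir);
    }
    String[] names = dir.list();
    if (names == null) {
      // Directory exists but listing failed, typically a permissions or
      // I/O problem; raise it instead of returning null.
      throw new AccessDeniedException(dir.toString(), null,
          "cannot read directory");
    }
    return names;
  }
}
```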






[jira] [Work started] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-27 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13631 started by Aaron Fabbri.
-
> S3Guard: implement move() for LocalMetadataStore, add unit tests
> 
>
> Key: HADOOP-13631
> URL: https://issues.apache.org/jira/browse/HADOOP-13631
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Building on HADOOP-13573 and HADOOP-13452, implement move() in 
> LocalMetadataStore and associated MetadataStore unit tests.
> (Making this a separate JIRA to break up work into decent-sized and 
> reviewable chunks.)






[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526972#comment-15526972
 ] 

Daryn Sharp commented on HADOOP-7352:
-

bq.  It changes the filesystem contract and could have quite an impact.

I'll take a look this afternoon, but you seem to be saying this may be a major 
compatibility break?

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526941#comment-15526941
 ] 

Hadoop QA commented on HADOOP-13663:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830522/HADOOP-13663.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a85538299447 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ae4729 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10612/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10612/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch, HADOOP-13663.001.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and this returns empty values. This triggers the following 
> 

[jira] [Updated] (HADOOP-13666) Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13666:
-
Summary: Supporting rack exclusion in countNumOfAvailableNodes in 
NetworkTopology  (was: Supporting rack exclusion in NetworkTopology)

> Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes the 
> exclusions are leaves. We should count the correct number of nodes even when 
> an exclusion is a rack.






[jira] [Updated] (HADOOP-13666) Supporting rack exclusion in NetworkTopology

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13666:
-
Status: Patch Available  (was: Open)

> Supporting rack exclusion in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes the 
> exclusions are leaves. We should count the correct number of nodes even when 
> an exclusion is a rack.






[jira] [Updated] (HADOOP-13666) Supporting rack exclusion in NetworkTopology

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13666:
-
Attachment: HADOOP-13666.000.patch

Supporting rack exclusions.

> Supporting rack exclusion in NetworkTopology
> 
>
> Key: HADOOP-13666
> URL: https://issues.apache.org/jira/browse/HADOOP-13666
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
> Attachments: HADOOP-13666.000.patch
>
>
> Right now, the counting of nodes in {{NetworkTopology}} assumes the 
> exclusions are leaves. We should count the correct number of nodes even when 
> an exclusion is a rack.






[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder

2016-09-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526872#comment-15526872
 ] 

Wei-Chiu Chuang commented on HADOOP-13665:
--

HADOOP-13061 is a related jira and the patch is pending. 

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>
> The current EC codec supports a single coder only (by default pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fall back to the pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codecs, such as transport encryption (see {{CryptoCodec.java}}), which 
> support fallback by specifying two or more coders as the value of the 
> property and loading them in order.
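The ordered-fallback scheme described above can be sketched as follows; 
Coder, demoFactories, and the "native,java" property format are illustrative 
assumptions, not the actual erasure-coding API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class FallbackCoderLoader {
    interface Coder { String name(); }

    // Walk the comma-separated preference list ("native,java") and return the
    // first coder that actually loads; a coder whose factory throws (e.g. the
    // native library is missing) is skipped in favor of the next entry.
    static Coder load(String property, Map<String, Supplier<Coder>> factories) {
        for (String key : property.split(",")) {
            Supplier<Coder> factory = factories.get(key.trim());
            if (factory == null) {
                continue;  // unknown coder name: try the next one
            }
            try {
                return factory.get();
            } catch (RuntimeException unavailable) {
                // e.g. native code not loaded: fall through to the next coder
            }
        }
        throw new IllegalStateException("No usable coder in: " + property);
    }

    // Demo registry: the "native" coder always fails to load.
    static Map<String, Supplier<Coder>> demoFactories() {
        Map<String, Supplier<Coder>> factories = new HashMap<>();
        factories.put("native", () -> { throw new RuntimeException("libhadoop not loaded"); });
        factories.put("java", () -> () -> "pure-java");
        return factories;
    }

    public static void main(String[] args) {
        // The native coder is unavailable, so loading falls back to pure Java.
        System.out.println(load("native,java", demoFactories()).name());  // pure-java
    }
}
```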






[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder

2016-09-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13665:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11842

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>
> I think EC issues should be reported in HDFS, but the code lands in Hadoop 
> Common anyway.
> The current EC codec supports a single coder only (by default pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fall back to the pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codecs, such as transport encryption (see {{CryptoCodec.java}}), which 
> support fallback by specifying two or more coders as the value of the 
> property and loading them in order.






[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder

2016-09-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13665:
-
Description: 
The current EC codec supports a single coder only (by default pure Java 
implementation). If the native coder is specified but is unavailable, it should 
fall back to the pure Java implementation.

One possible solution is to follow the convention of existing Hadoop native 
codecs, such as transport encryption (see {{CryptoCodec.java}}), which support 
fallback by specifying two or more coders as the value of the property and 
loading them in order.

  was:
I think EC issues should be reported in HDFS, but the code lands in Hadoop 
Common anyway.

The current EC codec supports a single coder only (by default pure Java 
implementation). If the native coder is specified but is unavailable, it should 
fall back to the pure Java implementation.

One possible solution is to follow the convention of existing Hadoop native 
codecs, such as transport encryption (see {{CryptoCodec.java}}), which support 
fallback by specifying two or more coders as the value of the property and 
loading them in order.


> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>
> The current EC codec supports a single coder only (by default pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fall back to the pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codecs, such as transport encryption (see {{CryptoCodec.java}}), which 
> support fallback by specifying two or more coders as the value of the 
> property and loading them in order.






[jira] [Created] (HADOOP-13666) Supporting rack exclusion in NetworkTopology

2016-09-27 Thread Inigo Goiri (JIRA)
Inigo Goiri created HADOOP-13666:


 Summary: Supporting rack exclusion in NetworkTopology
 Key: HADOOP-13666
 URL: https://issues.apache.org/jira/browse/HADOOP-13666
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.7.3
Reporter: Inigo Goiri


Right now, the counting of nodes in {{NetworkTopology}} assumes the exclusions 
are leaves. We should count the correct number of nodes even when an exclusion 
is a rack.
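The counting fix can be sketched with a toy topology; Node and countExcluded 
are illustrative stand-ins, not the real NetworkTopology API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class ExcludedLeafCount {
    // Toy topology node: a rack has children, a host is a leaf.
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        boolean isLeaf() { return children.isEmpty(); }
    }

    // Leaves under a node: 1 for a host, recursive sum for a rack.
    static int countLeaves(Node n) {
        if (n.isLeaf()) {
            return 1;
        }
        int total = 0;
        for (Node c : n.children) {
            total += countLeaves(c);
        }
        return total;
    }

    // The point of the JIRA: an excluded entry that is a rack must contribute
    // all hosts underneath it, not just 1 as a leaf-only count would.
    static int countExcluded(Collection<Node> excluded) {
        int total = 0;
        for (Node n : excluded) {
            total += countLeaves(n);
        }
        return total;
    }

    public static void main(String[] args) {
        Node rack = new Node("/rack1");
        rack.children.add(new Node("host1"));
        rack.children.add(new Node("host2"));
        // Excluding a whole rack plus one standalone host removes 3 hosts, not 2.
        System.out.println(countExcluded(Arrays.asList(rack, new Node("host3"))));  // 3
    }
}
```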






[jira] [Updated] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13663:
-
Attachment: HADOOP-13663.001.patch

Modified unit test.

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch, HADOOP-13663.001.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and this returns empty values. This triggers the following 
> exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  






[jira] [Commented] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526774#comment-15526774
 ] 

Steve Loughran commented on HADOOP-13663:
-

Could you add a test for this into {{TestSysInfoWindows}}?

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and this returns empty values. This triggers the following 
> exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  






[jira] [Commented] (HADOOP-13662) Upgrade jackson2 version

2016-09-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526770#comment-15526770
 ] 

Steve Loughran commented on HADOOP-13662:
-

yeah, typo. sorry

> Upgrade jackson2 version
> 
>
> Key: HADOOP-13662
> URL: https://issues.apache.org/jira/browse/HADOOP-13662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13662.001.patch
>
>
> We're currently pulling in version 2.2.3 - I think we should upgrade to the 
> latest 2.8.3.






[jira] [Created] (HADOOP-13665) Erasure Coding codec should support fallback coder

2016-09-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13665:


 Summary: Erasure Coding codec should support fallback coder
 Key: HADOOP-13665
 URL: https://issues.apache.org/jira/browse/HADOOP-13665
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Wei-Chiu Chuang


I think EC issues should be reported in HDFS, but the code lands in Hadoop 
Common anyway.

The current EC codec supports a single coder only (by default pure Java 
implementation). If the native coder is specified but is unavailable, it should 
fall back to the pure Java implementation.

One possible solution is to follow the convention of existing Hadoop native 
codecs, such as transport encryption (see {{CryptoCodec.java}}), which support 
fallback by specifying two or more coders as the value of the property and 
loading them in order.






[jira] [Commented] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526729#comment-15526729
 ] 

Hadoop QA commented on HADOOP-13663:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830517/HADOOP-13663.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 44f0c17f1a17 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / df1d0f5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10611/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10611/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system 

[jira] [Comment Edited] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526620#comment-15526620
 ] 

Eric Badger edited comment on HADOOP-12611 at 9/27/16 4:20 PM:
---

[~rkanter], thanks for the detailed response! What you said makes sense with 
regards to the race between the servers calling rollSecret() to push their 
secrets to ZK. Let me make sure that I understand the approach that you 
proposed above. 

1. Seed deterministically as we are currently doing and let rollSecret() happen 
twice. 
  - You said above to check secrets based on size, but the secrets list is only 
ever 2 elements. So I could check for it to change, but I don't know how I 
would check for each iteration based on the size of the list.
 
2. Keep track of the secrets list for both A and B at each iteration.
3. Check to make sure that A and B are correct at each iteration
  - A: [A1, null], [A2, A1], [A3 or B3, A2]
  - B: [A2, A1], [A3 or B3, A2]

I do see a potential problem with this setup though. Right after we call 
secretProviderB.init(), we check to make sure that it's secrets are equal to 
[A2, A1]. But if there is a slow code path for whatever reason in the main 
code, then rollSecret() could be called to update the secrets via either 
secretProviderA or secretProviderB. This would make the secrets [A3 or B3, A2] 
(or something else if rollSecret() was called multiple times) instead of [A2, 
A1]. I'm not sure how to remove this race condition without changing the source 
code. 

A little hokey, but would it be acceptable to explicitly call rollSecret() 
instead of using verify(), but calling them in a random order? This way we 
guarantee the number of times that rollSecret() is called, we guarantee the 
contents of secrets for both secretProviders, and we still provide the 
randomness of each secretProvider being able to talk to ZK first.


was (Author: ebadger):
[~rkanter], thanks for the detailed response! What you said makes sense with 
regards to the race between the servers calling rollSecret() to push their 
secrets to ZK. Let me make sure that I understand the approach that you 
proposed above. 

1. Seed deterministically as we are currently doing and let rollSecret() happen 
twice. 
  - You said above to check secrets based on size, but the secrets list is only 
ever 2 elements. So I could check for it to change, but I don't know how I 
would check for each iteration based on the size of the list. 
2. Keep track of the secrets list for both A and B at each iteration.
3. Check to make sure that A and B are correct at each iteration
  - A: [A1, null], [A2, A1], [A3 or B3, A2]
  - B: [A2, A1], [A3 or B3, A2]

I do see a potential problem with this setup though. Right after we call 
secretProviderB.init(), we check to make sure that its secrets are equal to 
[A2, A1]. But if there is a slow code path for whatever reason in the main 
code, then rollSecret() could be called to update the secrets via either 
secretProviderA or secretProviderB. This would make the secrets [A3 or B3, A2] 
(or something else if rollSecret() was called multiple times) instead of [A2, 
A1]. I'm not sure how to remove this race condition without changing the source 
code. 

A little hokey, but would it be acceptable to explicitly call rollSecret() 
instead of using verify(), but calling them in a random order? This way we 
guarantee the number of times that rollSecret() is called, we guarantee the 
contents of secrets for both secretProviders, and we still provide the 
randomness of each secretProvider being able to talk to ZK first.

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while 

[jira] [Commented] (HADOOP-12611) TestZKSignerSecretProvider#testMultipleInit occasionally fail

2016-09-27 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15526620#comment-15526620
 ] 

Eric Badger commented on HADOOP-12611:
--

[~rkanter], thanks for the detailed response! What you said makes sense with 
regards to the race between the servers calling rollSecret() to push their 
secrets to ZK. Let me make sure that I understand the approach that you 
proposed above. 

1. Seed deterministically as we are currently doing and let rollSecret() happen 
twice. 
  - You said above to check secrets based on size, but the secrets list is only 
ever 2 elements. So I could check for it to change, but I don't know how I 
would check for each iteration based on the size of the list. 
2. Keep track of the secrets list for both A and B at each iteration.
3. Check to make sure that A and B are correct at each iteration
  - A: [A1, null], [A2, A1], [A3 or B3, A2]
  - B: [A2, A1], [A3 or B3, A2]

I do see a potential problem with this setup though. Right after we call 
secretProviderB.init(), we check to make sure that its secrets are equal to 
[A2, A1]. But if there is a slow code path for whatever reason in the main 
code, then rollSecret() could be called to update the secrets via either 
secretProviderA or secretProviderB. This would make the secrets [A3 or B3, A2] 
(or something else if rollSecret() was called multiple times) instead of [A2, 
A1]. I'm not sure how to remove this race condition without changing the source 
code. 

A little hokey, but would it be acceptable to explicitly call rollSecret() 
instead of using verify(), but calling them in a random order? This way we 
guarantee the number of times that rollSecret() is called, we guarantee the 
contents of secrets for both secretProviders, and we still provide the 
randomness of each secretProvider being able to talk to ZK first.

> TestZKSignerSecretProvider#testMultipleInit occasionally fail
> -
>
> Key: HADOOP-12611
> URL: https://issues.apache.org/jira/browse/HADOOP-12611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12611.001.patch, HADOOP-12611.002.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2053/testReport/junit/org.apache.hadoop.security.authentication.util/TestZKSignerSecretProvider/testMultipleInit/
> Error Message
> expected null, but was:<[B@142bad79>
> Stacktrace
> java.lang.AssertionError: expected null, but was:<[B@142bad79>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.security.authentication.util.TestZKSignerSecretProvider.testMultipleInit(TestZKSignerSecretProvider.java:149)
> I think the failure was introduced after HADOOP-12181
> This is likely where the root cause is:
> 2015-11-29 00:24:33,325 ERROR ZKSignerSecretProvider - An unexpected 
> exception occurred while pulling data from ZooKeeper
> java.lang.IllegalStateException: instance must be started before calling this 
> method
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.getData(CuratorFrameworkImpl.java:363)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.pullFromZK(ZKSignerSecretProvider.java:341)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider.rollSecret(ZKSignerSecretProvider.java:264)
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8.CGLIB$rollSecret$2()
>   at 
> org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider$$EnhancerByMockitoWithCGLIB$$575f06d8$$FastClassByMockitoWithCGLIB$$6f94a716.invoke()
>   at org.mockito.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:216)
>   at 
> org.mockito.internal.creation.AbstractMockitoMethodProxy.invokeSuper(AbstractMockitoMethodProxy.java:10)
>   at 
> org.mockito.internal.invocation.realmethod.CGLIBProxyRealMethod.invoke(CGLIBProxyRealMethod.java:22)
>   at 
> org.mockito.internal.invocation.realmethod.FilteredCGLIBProxyRealMethod.invoke(FilteredCGLIBProxyRealMethod.java:27)
>   at 
> org.mockito.internal.invocation.Invocation.callRealMethod(Invocation.java:211)
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:36)
>   at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
>   at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
>   at 
> 
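The Curator error in the trace above comes from a lifecycle guard: {{CuratorFrameworkImpl#getData}} refuses to run unless the client has been started and not yet closed, which suggests the scheduled {{rollSecret}} fired after the test tore the client down. A minimal sketch of that guard pattern, with hypothetical names rather than Curator's actual internals:

```java
// Minimal sketch of a start/close lifecycle guard, the pattern behind
// "instance must be started before calling this method". Class and method
// names are illustrative, not Curator's real implementation.
class LifecycleClient {
    private volatile boolean started = false;

    void start() { started = true; }
    void close() { started = false; }

    byte[] getData(String path) {
        // Mirrors Guava's Preconditions.checkState(started, ...)
        if (!started) {
            throw new IllegalStateException(
                "instance must be started before calling this method");
        }
        return new byte[0]; // placeholder payload for the sketch
    }
}
```

Any background task that can outlive the client (such as a secret-rolling timer) will trip this guard unless it is cancelled before {{close}}.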

[jira] [Resolved] (HADOOP-13645) Refine TestRackResolver#testCaching to ensure cache is truly tested

2016-09-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HADOOP-13645.
--
Resolution: Not A Bug

Though the test is not written in the neatest way, it does actually exercise the 
cache. It would be good to rewrite the test case for readability, but that is a 
lower priority, so I am closing this as Not a Bug for now. Anyone interested in 
improving it should feel free to reopen.
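One way to make such a test fail when the cache is bypassed is to count backend lookups instead of only checking the returned value. A hedged sketch with hypothetical names, not the real {{RackResolver}} API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative resolver that caches rack lookups and counts how often the
// expensive backend (DNS/script) is actually consulted.
class CachingResolver {
    private final Map<String, String> cache = new HashMap<>();
    int backendCalls = 0;   // exposed so a test can assert on cache hits

    String resolve(String host) {
        return cache.computeIfAbsent(host, h -> {
            backendCalls++;               // simulates the expensive lookup
            return "/rack-of-" + h;
        });
    }
}
```

Asserting {{backendCalls == 1}} after two {{resolve}} calls for the same host fails precisely when the cache was not consulted, which is the coverage gap this issue describes.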

> Refine TestRackResolver#testCaching to ensure cache is truly tested
> ---
>
> Key: HADOOP-13645
> URL: https://issues.apache.org/jira/browse/HADOOP-13645
> Project: Hadoop Common
>  Issue Type: Test
>  Components: util
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HADOOP-13645.01.patch
>
>
> TestRackResolver#testCaching does not seem to cover the caching behavior well: 
> the test case won't fail even if the cache wasn't used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13663:
-
Attachment: HADOOP-13663.000.patch

The patch checks that the index is not negative before taking the substring. I 
also tried to put the error message together in one place in the code.

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and the call returns empty values. This triggers the 
> following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  
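The fix described in the comment boils down to guarding a delimiter-based parse against {{String#indexOf}} returning -1 on an empty or partial winutils line. A minimal sketch with illustrative names, not the actual {{SysInfoWindows}} code:

```java
// Hedged sketch of the guard behind the patch: bail out instead of calling
// substring() with the -1 that indexOf() returns for a malformed line.
class WinUtilsParser {
    /** Returns the text after the first ',' or null if the line is malformed. */
    static String fieldAfterComma(String line) {
        int idx = line.indexOf(',');
        if (idx < 0) {        // winutils emitted an empty or partial line
            return null;      // caller can skip this sample instead of crashing
        }
        return line.substring(idx + 1);
    }
}
```

With the guard, an empty winutils read yields a null field the monitor can skip, instead of the {{StringIndexOutOfBoundsException}} shown above.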






[jira] [Updated] (HADOOP-13663) Index out of range in SysInfoWindows

2016-09-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-13663:
-
Status: Patch Available  (was: Open)

> Index out of range in SysInfoWindows
> 
>
> Key: HADOOP-13663
> URL: https://issues.apache.org/jira/browse/HADOOP-13663
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.3
> Environment: Windows
>Reporter: Inigo Goiri
> Attachments: HADOOP-13663.000.patch
>
>
> Sometimes, the {{NodeResourceMonitor}} tries to read the system utilization 
> from winutils.exe and the call returns empty values. This triggers the 
> following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at java.lang.String.substring(String.java:1911)
>   at 
> org.apache.hadoop.util.SysInfoWindows.refreshIfNeeded(SysInfoWindows.java:158)
>   at 
> org.apache.hadoop.util.SysInfoWindows.getPhysicalMemorySize(SysInfoWindows.java:247)
>   at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getPhysicalMemorySize(ResourceCalculatorPlugin.java:63)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:139)
>  






  1   2   >