[jira] [Commented] (HADOOP-13006) FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile() doesn't run

2016-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231647#comment-15231647
 ] 

Hadoop QA commented on HADOOP-13006:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 25s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12797663/HADOOP-13006.01.patch 
|
| JIRA Issue | HADOOP-13006 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 91953c09b08a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e82f961 |

[jira] [Assigned] (HADOOP-13006) FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile() doesn't run

2016-04-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki reassigned HADOOP-13006:
---

Assignee: Kai Sasaki

> FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()
>  doesn't run
> --
>
> Key: HADOOP-13006
> URL: https://issues.apache.org/jira/browse/HADOOP-13006
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: HADOOP-13006.01.patch
>
>
> The test method 
> {{FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()}}
> doesn't run, because it's a JUnit 3 test in a JUnit 4 class; it needs a 
> {{@Test}} annotation.
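For reference, the JUnit 4 discovery rule can be demonstrated with a tiny self-contained sketch. The {{@Test}} stand-in below is local to the example (not the real org.junit annotation), so the snippet runs without JUnit on the classpath:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class Main {
    // Stand-in for org.junit.Test, to keep the sketch self-contained.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    static class SuiteV4Style {
        @Test
        public void testAnnotated() {}

        // JUnit 3 relied on the "test" name prefix; a JUnit 4 runner
        // silently skips this method because it has no @Test annotation.
        public void testListStatusThrowsExceptionForNonExistentFile() {}
    }

    public static void main(String[] args) {
        int run = 0;
        for (Method m : SuiteV4Style.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) run++;  // JUnit 4 discovery rule
        }
        System.out.println("methods discovered: " + run);
    }
}
```

Only the annotated method is discovered, which is exactly why the original test never ran.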



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13006) FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile() doesn't run

2016-04-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13006:

Attachment: HADOOP-13006.01.patch

> FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()
>  doesn't run
> --
>
> Key: HADOOP-13006
> URL: https://issues.apache.org/jira/browse/HADOOP-13006
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13006.01.patch
>
>
> The test method 
> {{FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()}}
> doesn't run, because it's a JUnit 3 test in a JUnit 4 class; it needs a 
> {{@Test}} annotation.





[jira] [Updated] (HADOOP-13006) FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile() doesn't run

2016-04-07 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13006:

Status: Patch Available  (was: Open)

> FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()
>  doesn't run
> --
>
> Key: HADOOP-13006
> URL: https://issues.apache.org/jira/browse/HADOOP-13006
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: HADOOP-13006.01.patch
>
>
> The test method 
> {{FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()}}
> doesn't run, because it's a JUnit 3 test in a JUnit 4 class; it needs a 
> {{@Test}} annotation.





[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-04-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231362#comment-15231362
 ] 

Chris Nauroth commented on HADOOP-12875:


[~vishwajeet.dusane], thank you for providing a patch to make use of the 
contract tests.  I have a few comments in addition to the helpful feedback from 
Tony.

How can I (and other Apache community members) obtain account credentials that 
we can put into contract-test-options.xml, so that we can run the tests?  I 
have an Azure subscription.  I checked manage.windowsazure.com, but I couldn't 
find an option for provisioning Azure Data Lake access.  It's going to be vital 
for ongoing maintenance that community members have a way to get credentials so 
that they can test patches against the live service before committing.

For configuration of the credentials, I recommend using a technique we've used 
in hadoop-aws and hadoop-azure to split the credentials into a separate XML 
file, which then gets XIncluded from the main XML file.  We can then place the 
name of the file with the credentials into .gitignore.  This helps prevent 
accidentally committing someone's private credentials to the Apache repo, which 
would then compromise the account.  Check out hadoop-aws and hadoop-azure for 
more details on how to do this.
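As a hedged illustration of that split (file and property names here are only examples, not necessarily the exact hadoop-aws/hadoop-azure layout), the committed file XIncludes a gitignored one:

```xml
<!-- src/test/resources/contract-test-options.xml (committed to the repo) -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- Non-secret test options live here. -->

  <!-- Credentials come from a separate file that is listed in .gitignore,
       so they can never be committed by accident. -->
  <xi:include href="auth-keys.xml"/>
</configuration>
```

The gitignored file then holds only the account name and key properties for the live service.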

{code}
System.setProperty("hadoop.home.dir", System.getProperty("user.dir"));
{code}

Why is this necessary?

I'm unclear on the intent of the various "benchmark" tests.  They use a mock 
back-end, so they aren't really providing an accurate benchmark of the true 
service interaction.  There are no assertions, so they aren't verifying 
functionality beyond making sure things don't throw exceptions.  They print 
timing information to the console, so is the expectation that these tests could 
be used for manual measurement before and after applying later patches?

Your time measurements are using {{System#currentTimeMillis}}, which may be 
subject to inaccuracy if the system clock changes or NTP makes a negative 
adjustment in the middle of a test run.  Instead, I recommend using 
{{org.apache.hadoop.util.Time#monotonicNow}}, which is a wrapper over 
{{System#nanoTime}}, which is guaranteed to be monotonically increasing.
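For illustration, the monotonic measurement pattern looks like this with plain {{System#nanoTime}} (a sketch of the idea, not the Hadoop {{Time}} class itself):

```java
public class Main {
    public static void main(String[] args) {
        // Hadoop's Time#monotonicNow wraps System.nanoTime(); same idea here.
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;  // the work being timed
        long elapsedNanos = System.nanoTime() - start;
        // nanoTime is monotonic: the elapsed value can never be negative,
        // even if the wall clock (currentTimeMillis) is stepped backwards
        // by NTP in the middle of the measurement.
        System.out.println("non-negative elapsed: " + (elapsedNanos >= 0)
                + " (sum=" + sum + ")");
    }
}
```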

{code}
  @Override
  protected AbstractFSContract createContract(Configuration configuration) {
try {
  return new AdlStorageContract(configuration);
} catch (URISyntaxException e) {
  return null;
} catch (IOException e) {
  return null;
}
  }
{code}

If any of these exceptions happens, then returning null is likely to cause a 
confusing {{NullPointerException}} later.  I'd prefer that we fail fast by 
throwing an unchecked exception, such as {{IllegalStateException}}, with a 
descriptive error message, and the original exception nested as root cause.
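A self-contained sketch of that fail-fast alternative (the types here are simplified stand-ins for the ADL contract classes, not the actual patch code):

```java
import java.io.IOException;

public class Main {
    // Stand-in for createContract: instead of swallowing the exception and
    // returning null, wrap it in an unchecked exception with a descriptive
    // message and the original exception preserved as the root cause.
    static Object createContract(boolean fail) {
        try {
            if (fail) throw new IOException("cannot reach store");
            return new Object();  // stand-in for the real contract object
        } catch (IOException e) {
            throw new IllegalStateException("Unable to create ADL test contract", e);
        }
    }

    public static void main(String[] args) {
        try {
            createContract(true);
        } catch (IllegalStateException e) {
            // The failure surfaces immediately, with its cause attached,
            // rather than as a confusing NullPointerException later.
            System.out.println(e.getMessage() + " / cause: " + e.getCause().getMessage());
        }
    }
}
```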

It's unusual to see contract test subclasses adding other test cases specific 
to the file system, like {{readCombinationTest}}.  The abstract contract test 
classes are meant to fully define the test cases, and then the subclasses 
usually just tweak the contract and skip tests that they aren't able to satisfy 
yet.  For clarity, I suggest refactoring those additional tests into separate 
suites.


> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Hadoop-12875-001.patch
>
>
> This JIRA describes contract test and unit test case support for the Azure 
> Data Lake file system.





[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231329#comment-15231329
 ] 

Kai Zheng commented on HADOOP-11540:


Thanks, Colin, for the confirmation and further thoughts! I will update the 
patch accordingly, and think about how to simplify the inheritance hierarchy 
as a follow-on task.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, 
> HADOOP-11540-v5.patch, HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-v8.patch, HADOOP-11540-v9.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.





[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231306#comment-15231306
 ] 

Colin Patrick McCabe commented on HADOOP-11540:
---

Thanks, [~drankye].  Good progress here.

bq. I agree it would be easier to understand. The only thing I'm not sure 
about is that there are at least 6 Java coders and 2 x 6 encode/decode 
functions right now; adding a loop to reset the list of output buffers in 
each function looks like a major change. That's why I put the common code in 
the abstract class.

Hmm.  I still think changing the Java coders is the simplest thing to do.  It's 
a tiny amount of code, or should be (calling one function), and simple to 
understand.

bq. How about introducing AbstractJavaRawEncoder/AbstractJavaRawDecoder 
similar to the native ones for such things, so we can get rid of 
wantInitOutputs and don't have to change each of the Java coders?

I don't think this would be a good idea.  We need to start thinking about 
simplifying the inheritance hierarchy and getting rid of some levels.  We have 
too many non-abstract base classes, which makes it difficult to follow.  
Inheritance should not be used to accomplish code reuse, only to express a 
genuine is-a relationship.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, 
> HADOOP-11540-v5.patch, HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-v8.patch, HADOOP-11540-v9.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.





[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231286#comment-15231286
 ] 

Elliott Clark commented on HADOOP-12973:


bq. but I don't want the WindowsDU to be a subclass of the Linux DU
Yeah, I can see that.

Closeable seems good.

bq. This can all be avoided by just passing the Builder object to the 
constructor.
OK, sounds good. Let me get that out.
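A minimal sketch of the "pass the Builder to the constructor" idea (class and field names here are hypothetical, not the patch's actual API):

```java
public class Main {
    static class DiskUsage {
        final String path;
        final long intervalMs;

        // The constructor takes the whole Builder, so adding a new option
        // later never changes the constructor signature.
        DiskUsage(Builder b) {
            this.path = b.path;
            this.intervalMs = b.intervalMs;
        }

        static class Builder {
            String path;
            long intervalMs = 600_000;  // illustrative default

            Builder path(String p) { this.path = p; return this; }
            Builder interval(long ms) { this.intervalMs = ms; return this; }
            DiskUsage build() { return new DiskUsage(this); }
        }
    }

    public static void main(String[] args) {
        DiskUsage du = new DiskUsage.Builder().path("/data").interval(1000).build();
        System.out.println(du.path + " " + du.intervalMs);
    }
}
```

A pluggable implementation (e.g. a Windows variant) can accept the same Builder without inheriting from the Linux DU.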

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.





[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231250#comment-15231250
 ] 

Hadoop QA commented on HADOOP-12994:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 9s 
{color} | {color:red} root: patch generated 1 new + 279 unchanged - 10 fixed = 
280 total (was 289) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 9s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 59s 
{color} | {color:green} hadoop-common in the 

[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231248#comment-15231248
 ] 

Kai Zheng commented on HADOOP-11540:


Thanks, [~cmccabe], for the additional review!
bq. Is it intentional that the "output" parameter is ignored here?
Yes, it's intentional, because there is actually no content to convert; the 
only need is to allocate a direct ByteBuffer. I will check through the code 
and see whether it would be better to clean this up.
bq. Why not just have the encode() function zero the buffer in every case? I 
don't see why the pure java code benefits from doing this differently-- and it 
is much simpler to understand if all the coders do it the same way.
I agree it would be easier to understand. The only thing I'm not sure about is 
that there are at least 6 Java coders and 2 x 6 encode/decode functions right 
now; adding a loop to reset the list of output buffers in each function looks 
like a major change. That's why I put the common code in the abstract class. 
How about introducing AbstractJavaRawEncoder/AbstractJavaRawDecoder similar 
to the native ones for such things, so we can get rid of 
{{wantInitOutputs}} and don't have to change each of the Java coders?
bq. All these functions can fail. You need to check for, and handle their 
failures.
I agree, even though they're simple calls.
bq. isAllowingChangeInputs, isAllowingVerboseDump: should be allowChangeInputs, 
allowVerboseDump for clarity.
Right, I will do it. 

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, 
> HADOOP-11540-v5.patch, HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-v8.patch, HADOOP-11540-v9.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.





[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-04-07 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231083#comment-15231083
 ] 

Lei (Eddy) Xu commented on HADOOP-12982:


[~jojochuang] Thanks a lot for addressing the docs.

{code}
fs.s3.buffer.dir
${hadoop.tmp.dir}/s3
Determines where on the local filesystem the S3 filesystem
{code}

Would it be better to say "local filesystem the s3: and s3n: filesystems..."?

+1 pending the above comments being addressed.

Thanks.

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.





[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231078#comment-15231078
 ] 

Chris Nauroth commented on HADOOP-12994:


[~ste...@apache.org], unfortunately the Azure test still failed.  Sorry, I 
really should have given you the full stack trace the first time.  See below.

Your latest patch changed the single-byte read, but this test failure actually 
happens within the bulk positional read.  It's the very last assertion of 
{{testReadSmallFile}}.

{{NativeAzureFsInputStream}} does not override the positional read method, so 
it inherits the seek/bulk read/seek back implementation from {{FSInputStream}}. 
 {{NativeAzureFsInputStream#seek}} is coded to raise EOFException eagerly, so this 
is where it fails.  When I fixed the test locally, I did so by copy-pasting 
your override of the positional read method from {{S3AInputStream}}.

Your comment about covering this logic in the base class seems apt considering 
the copy-pasting to different subclasses.
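To make the contract point concrete, here is a hedged, self-contained sketch of a positional read that reports EOF with -1 instead of throwing (array-backed for illustration only; not the actual S3AInputStream or Azure code):

```java
import java.io.EOFException;
import java.io.IOException;

public class Main {
    // A positional read starting at or past EOF should return -1 rather
    // than throw. A seek-based default (seek / bulk read / seek back) can
    // violate this when seek() raises EOFException eagerly, which is the
    // failure mode seen in testReadSmallFile.
    static int positionalRead(byte[] file, long pos, byte[] buf, int off, int len)
            throws IOException {
        if (pos < 0) throw new EOFException("negative position");
        if (pos >= file.length) return -1;  // at/past EOF: report, don't throw
        int n = (int) Math.min(len, file.length - pos);
        System.arraycopy(file, (int) pos, buf, off, n);
        return n;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3};
        byte[] buf = new byte[8];
        System.out.println("read at EOF: " + positionalRead(data, 3, buf, 0, 4));
        System.out.println("partial read: " + positionalRead(data, 1, buf, 0, 4));
    }
}
```

Putting this logic in the base class, as suggested above, would avoid repeating the override in each subclass.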

{code}
testReadSmallFile(org.apache.hadoop.fs.azure.contract.TestAzureNativeContractSeek)
  Time elapsed: 4.038 sec  <<< ERROR!
java.io.EOFException: Attempted to seek or read past the end of the file
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.seek(NativeAzureFileSystem.java:833)
at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:70)
at 
org.apache.hadoop.fs.BufferedFSInputStream.read(BufferedFSInputStream.java:108)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
at 
org.apache.hadoop.fs.contract.AbstractContractSeekTest.testReadSmallFile(AbstractContractSeekTest.java:568)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch
>
>
> Some work on S3a has shown up that there aren't tests catching regressions in 
> readFully, reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS





[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231023#comment-15231023
 ] 

Chris Nauroth commented on HADOOP-12994:


[~ste...@apache.org], thank you for patch v004.  This looks good to me.  Nice 
catch on that stats counting bug too.  Let's see what pre-commit thinks.  I'll 
kick off an Azure test run too.

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch
>
>
> Some work on S3a has shown up that there aren't tests catching regressions in 
> readFully, reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS





[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Status: Patch Available  (was: Open)

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch
>
>
> Some work on S3a has shown up that there aren't tests catching regressions in 
> readFully, reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS





[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Attachment: HADOOP-12994-004.patch

Patch 004: as 003, but fixes a statistics-counting bug in Azure; a -1 from a 
read was being added to the byte count, actually lowering it.
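The fix pattern can be sketched as follows (hypothetical class and method names, not the actual Azure stream code): only positive read results are added to the byte counter, since -1 is an EOF sentinel, not a byte count.

```java
// A minimal sketch of the fix pattern: only positive read results are added
// to the byte counter; the -1 EOF sentinel must not shrink the statistics.
class ReadStatistics {
    private long bytesRead = 0;

    /** Record the result of a read(); -1 signals EOF, not bytes read. */
    void recordReadResult(int result) {
        if (result > 0) {
            bytesRead += result;
        }
    }

    long getBytesRead() {
        return bytesRead;
    }
}
```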

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Status: Open  (was: Patch Available)

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230976#comment-15230976
 ] 

Colin Patrick McCabe commented on HADOOP-11540:
---

Thanks, [~drankye].

{code}
+  /**
+   * Convert an output bytes array buffer to direct ByteBuffer.
+   * @param output
+   * @return direct ByteBuffer
+   */
+  protected ByteBuffer convertOutputBuffer(byte[] output, int len) {
+    ByteBuffer directBuffer = ByteBuffer.allocateDirect(len);
+    return directBuffer;
+  }
{code}
Is it intentional that the "output" parameter is ignored here?
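If the bytes were in fact meant to be carried over, a sketch of a version that uses the argument might look like this (hypothetical class name; whether the copy is actually wanted for an output buffer is exactly the open question above):

```java
import java.nio.ByteBuffer;

// A version that actually uses the "output" argument: copy the heap array's
// contents into the newly allocated direct buffer.
class BufferUtil {
    static ByteBuffer convertOutputBuffer(byte[] output, int len) {
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(len);
        directBuffer.put(output, 0, len); // use, rather than ignore, "output"
        directBuffer.flip();              // make the copied bytes readable
        return directBuffer;
    }
}
```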

bq. For initOutputs and resetBuffer, good catch! About this I initially thought 
as you suggested: instead of having initOutputs, just letting concrete coders 
override resetBuffer, which would be most flexible. Then I realized for Java 
coders, a default behavior can be provided and used; for native coders, we can 
avoid having it because at the beginning of the encode() call the native coder 
can memset the output buffers directly. If instead the native coder has to 
provide resetBuffer, then a JNI function has to be added, which will be called 
some times to reset output buffers. Considering the overhead in both 
implementation and extra JNI calls, I used the initOutputs() approach.

Thanks for the explanation.  Why not just have the encode() function zero the 
buffer in every case?  I don't see why the pure java code benefits from doing 
this differently-- and it is much simpler to understand if all the coders do it 
the same way.
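The suggestion above can be sketched as follows (illustrative names, not the actual Hadoop coder classes): the shared encode() entry point zeroes every output buffer before delegating, so Java and native coders reset state the same way.

```java
import java.util.Arrays;

// The shared encode() entry point resets all output buffers, so no
// per-coder initOutputs()/resetBuffer() hook is needed.
abstract class RawErasureEncoder {
    public void encode(byte[][] inputs, byte[][] outputs) {
        for (byte[] out : outputs) {
            Arrays.fill(out, (byte) 0); // one reset path for every coder
        }
        doEncode(inputs, outputs);
    }

    /** Concrete coders implement only the actual encoding step. */
    protected abstract void doEncode(byte[][] inputs, byte[][] outputs);
}
```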

{code}
void setCoder(JNIEnv* env, jobject thiz, IsalCoder* pCoder) {
  jclass clazz = (*env)->GetObjectClass(env, thiz);
  jfieldID fid = (*env)->GetFieldID(env, clazz, "nativeCoder", "J");
  (*env)->SetLongField(env, thiz, fid, (jlong) pCoder);
}
{code}
All these functions can fail.  You need to check for, and handle their failures.

isAllowingChangeInputs, isAllowingVerboseDump: should be {{allowChangeInputs}}, 
{{allowVerboseDump}} for clarity.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, 
> HADOOP-11540-v5.patch, HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-v8.patch, HADOOP-11540-v9.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Status: Patch Available  (was: Open)

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230857#comment-15230857
 ] 

Steve Loughran commented on HADOOP-12994:
-

As an aside, I was thinking we could just modify {{FSInputStream.read(long 
position, byte[] buffer, int offset, int length)}} to catch an EOF exception 
and downgrade.

With azure, there's now two classes that handle EOF specially; I think swift 
will need it too. Put it all in one place, and there's one code path that 
everything picks up.

*I can't see how, on any FS, catching an EOF exception on the seek or read and 
downgrading to a -1 would be a mistake*

Remember: the param validation of seeking < 0 has already taken place, so the 
only way an EOF could be raised is if they went off the end and the FS raises 
EOFs here, for which catch-and-downgrade is the required action.
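The single-code-path idea can be sketched like this (hypothetical class names, not the actual FSInputStream API): the base positioned read catches EOFException from the underlying seek/read and downgrades it to -1, so Azure, Swift, and any other store inherit the behavior from one place.

```java
import java.io.EOFException;
import java.io.IOException;

// One shared catch-and-downgrade path for positioned reads.
abstract class BasePositionedStream {
    /** The raw positioned read; may throw EOFException past end of file. */
    protected abstract int rawRead(long position, byte[] buffer, int offset,
                                   int length) throws IOException;

    public int read(long position, byte[] buffer, int offset, int length)
            throws IOException {
        try {
            return rawRead(position, buffer, offset, length);
        } catch (EOFException e) {
            return -1; // the contract: past EOF means -1, not an exception
        }
    }
}
```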

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Attachment: HADOOP-12994-003.patch

HADOOP-12994 patch 003:
- address Chris's concerns
- add an option for "supports positioned readable"; the default is driven by 
"supports seek"
- split up test operations into separate test cases
- tested on local, HDFS, and S3 filesystems

I've caught the EOF in Azure and downgraded to a -1, but not tested it; my ASF 
MSDN subscription hasn't renewed, so I'm chasing that up and am currently 
Azure-less.

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12994:

Status: Open  (was: Patch Available)

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230762#comment-15230762
 ] 

Colin Patrick McCabe edited comment on HADOOP-12973 at 4/7/16 6:31 PM:
---

bq. It makes it more obvious when someone overrides the class where things are.

Hmm.  How about making the class {{final}} instead?

Re: {{DU}} versus {{WindowsDU}}. If you really want to separate the classes, I 
don't object, but I don't want the {{WindowsDU}} to be a subclass of the Linux 
{{DU}}.  That is just weird.

bq. Shutdown is needed. So it's very strange to have a shutdown without a start.

There is a start-- in {{GetSpaceUsedBuilder}}.  Having an "init" method that 
you have to call after initialization is an anti-pattern.  There is no reason 
why the user should have to care whether {{GetSpaceUsedBuilder}} contains a 
thread or not-- many implementations won't need a thread.  The fact that not 
all subclasses need threads is a good sign that thread management doesn't 
belong in the common interface.

I'm also curious how you feel about the idea of making the interface 
{{Closeable}}, as we've done with many other interfaces such as 
{{FailoverProxyProvider}}, {{ServicePlugin}}, {{BlockReader}}, {{Peer}}, 
{{PeerServer}}, {{FsVolumeReference}}, etc. etc.
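The Closeable suggestion can be sketched as follows ({{GetSpaceUsed}} here is a simplified stand-in for the real interface, and {{FixedSpaceUsed}} is illustrative): extending {{Closeable}} enables try-with-resources and lets the compiler and linters flag unclosed instances.

```java
import java.io.Closeable;
import java.io.IOException;

// Extending Closeable gives every implementation a standard shutdown hook.
interface GetSpaceUsed extends Closeable {
    long getUsed() throws IOException;
}

// A trivial implementation with no background thread: close() is a no-op,
// which is exactly why thread management stays out of the interface.
class FixedSpaceUsed implements GetSpaceUsed {
    private final long used;

    FixedSpaceUsed(long used) { this.used = used; }

    @Override
    public long getUsed() { return used; }

    @Override
    public void close() { /* nothing to shut down */ }
}
```

Callers then write {{try (GetSpaceUsed du = ...) { ... }}} and cleanup happens automatically whether or not the implementation owns a thread.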


was (Author: cmccabe):
bq. It makes it more obvious when someone overrides the class where things are.

Hmm.  How about making the class {{final}} instead?

Re: DU versus WindowsDU. If you really want to separate the classes, I don't 
object, but I don't want the WindowsDU to be a subclass of the Linux DU.  That 
is just weird.

bq. Shutdown is needed. So it's very strange to have a shutdown without a start.

There is a start-- in GetSpaceUsedBuilder.  Having an "init" method is an 
anti-pattern.


> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230762#comment-15230762
 ] 

Colin Patrick McCabe edited comment on HADOOP-12973 at 4/7/16 6:32 PM:
---

bq. It makes it more obvious when someone overrides the class where things are.

Hmm.  How about making the class {{final}} instead?

Re: {{DU}} versus {{WindowsDU}}. If you really want to separate the classes, I 
don't object, but I don't want the {{WindowsDU}} to be a subclass of the Linux 
{{DU}}.  That is just weird.

bq. Shutdown is needed. So it's very strange to have a shutdown without a start.

There is a start-- in {{GetSpaceUsedBuilder}}.  Having an "init" method that 
you have to call after initialization is an anti-pattern.  There is no reason 
why the user should have to care whether {{GetSpaceUsedBuilder}} contains a 
thread or not-- many implementations won't need a thread.  The fact that not 
all subclasses need threads is a good sign that thread management doesn't 
belong in the common interface.

I'm also curious how you feel about the idea of making the interface 
{{Closeable}}, as we've done with many other interfaces such as 
{{FailoverProxyProvider}}, {{ServicePlugin}}, {{BlockReader}}, {{Peer}}, 
{{PeerServer}}, {{FsVolumeReference}}, etc. etc.  The compiler and various 
linters warn about failures to close {{Closeable}} objects in many cases, but 
not about failure to call custom shutdown functions.


was (Author: cmccabe):
bq. It makes it more obvious when someone overrides the class where things are.

Hmm.  How about making the class {{final}} instead?

Re: {{DU}} versus {{WindowsDU}}. If you really want to separate the classes, I 
don't object, but I don't want the {{WindowsDU}} to be a subclass of the Linux 
{{DU}}.  That is just weird.

bq. Shutdown is needed. So it's very strange to have a shutdown without a start.

There is a start-- in {{GetSpaceUsedBuilder}}.  Having an "init" method that 
you have to call after initialization is an anti-pattern.  There is no reason 
why the user should have to care whether {{GetSpaceUsedBuilder}} contains a 
thread or not-- many implementations won't need a thread.  The fact that not 
all subclasses need threads is a good sign that thread management doesn't 
belong in the common interface.

I'm also curious how you feel about the idea of making the interface 
{{Closeable}}, as we've done with many other interfaces such as 
{{FailoverProxyProvider}}, {{ServicePlugin}}, {{BlockReader}}, {{Peer}}, 
{{PeerServer}}, {{FsVolumeReference}}, etc. etc.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230772#comment-15230772
 ] 

Colin Patrick McCabe edited comment on HADOOP-12973 at 4/7/16 6:27 PM:
---

Part of the reason I believe there should be a builder is because otherwise, we 
have no way of adding new parameters in a backwards compatible way.  For 
example, if we want to add a Foobar parameter to the constructor, we can't do 
that in a compatible fashion since the Factory is hard-coded to look for a 
3-argument constructor with {{File, long, long}} by this code:

{code}
37  Constructor cons =
38    duKlass.getConstructor(File.class, long.class, long.class);
{code}

And if I accidentally implement a constructor with File, long, long that uses 
those parameters for something else, weird stuff happens.  For example, if I 
have a constructor like this:
{{MyGetSpaceUsedSubclass(File birthdayMessage, long numberOfClowns, long 
numberOfBirthdayCakes)}}, the factory will happily find it and pass it 
arguments that make no sense.  Or if someone not deeply familiar with the code 
changes the order of the constructor parameters, we'll have things break in 
weird ways.  This can all be avoided by just passing the Builder object to the 
constructor.
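Passing the builder itself to the constructor can be sketched like this (all names here are illustrative, not the actual Hadoop classes): new parameters can be added to the builder later without breaking any subclass constructor, and there is no reflective {{(File, long, long)}} lookup to mis-match.

```java
import java.io.File;

// The builder carries all configuration; implementations take it whole.
class SpaceUsedBuilder {
    File path;
    long interval;

    SpaceUsedBuilder setPath(File p) { this.path = p; return this; }
    SpaceUsedBuilder setInterval(long i) { this.interval = i; return this; }
}

class MySpaceUsed {
    private final File path;
    private final long interval;

    // A single, named argument: no silent mix-ups if someone reorders
    // positional constructor parameters.
    MySpaceUsed(SpaceUsedBuilder b) {
        this.path = b.path;
        this.interval = b.interval;
    }

    long getInterval() { return interval; }
}
```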


was (Author: cmccabe):
Part of the reason I believe there should be a builder is because otherwise, we 
have no way of adding new parameters in a backwards compatible way.  For 
example, if we want to add a Foobar parameter to the constructor, we can't do 
that in a compatible fashion since the Factory is hard-coded to look for a 
3-argument constructor with {{File, long, long}} by this code:

{code}
37  Constructor cons =
38    duKlass.getConstructor(File.class, long.class, long.class);
{code}

And if I accidentally implement a constructor with File, long, long that uses 
those parameters for something else, weird stuff happens.  For example, if I 
have a constructor like this:
{{MyGetSpaceUsedSubclass(File birthdayMessage, long numberOfClowns, long 
numberOfBirthdayCakes)}}, the factory will happily find it and pass it 
arguments that make no sense.  This can all be avoided by just passing the 
Builder object to the constructor.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230762#comment-15230762
 ] 

Colin Patrick McCabe commented on HADOOP-12973:
---

bq. It makes it more obvious when someone overrides the class where things are.

Hmm.  How about making the class {{final}} instead?

Re: DU versus WindowsDU. If you really want to separate the classes, I don't 
object, but I don't want the WindowsDU to be a subclass of the Linux DU.  That 
is just weird.

bq. Shutdown is needed. So it's very strange to have a shutdown without a start.

There is a start-- in GetSpaceUsedBuilder.  Having an "init" method is an 
anti-pattern.


> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230772#comment-15230772
 ] 

Colin Patrick McCabe commented on HADOOP-12973:
---

Part of the reason I believe there should be a builder is because otherwise, we 
have no way of adding new parameters in a backwards compatible way.  For 
example, if we want to add a Foobar parameter to the constructor, we can't do 
that in a compatible fashion since the Factory is hard-coded to look for a 
3-argument constructor with {{File, long, long}} by this code:

{code}
37  Constructor cons =
38    duKlass.getConstructor(File.class, long.class, long.class);
{code}

And if I accidentally implement a constructor with File, long, long that uses 
those parameters for something else, weird stuff happens.  For example, if I 
have a constructor like this:
{{MyGetSpaceUsedSubclass(File birthdayMessage, long numberOfClowns, long 
numberOfBirthdayCakes)}}, the factory will happily find it and pass it 
arguments that make no sense.  This can all be avoided by just passing the 
Builder object to the constructor.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230708#comment-15230708
 ] 

Xiaobing Zhou commented on HADOOP-12969:


HADOOP-8813 didn't explain why they must be 
@InterfaceAudience.LimitedPrivate. It would be good to make them public, as 
[~steve_l] explained.

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch, HADOOP-12969.001.patch, 
> HADOOP-12969.002.patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230702#comment-15230702
 ] 

Hadoop QA commented on HADOOP-12982:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 34s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 37s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12797553/HADOOP-12982.002.patch
 |
| JIRA Issue | HADOOP-12982 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | 

[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-04-07 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230575#comment-15230575
 ] 

Sean Busbey commented on HADOOP-12893:
--

copying my comments from common-dev@

{quote}

Each artifact that the PMC publishes must abide by the ASF licensing
policy. That includes

* Source release artifact
* any convenience binary artifacts placed on dist.apache
* any convenience jars put into the ASF Nexus repository

That likely means that we bundle much more than just what's in the source tree.
{quote}

So we might end up needing different LICENSE/NOTICE entries for source and 
binary tarballs, and for some of our jars. These files do end up being very 
large (I could attach an example from HBase, or NiFi, if it would help to see 
what projects that have spent a fair bit of time on this ended up doing). One 
option is to have a directory with various license files and then reference 
them in our LICENSE file. There's no such shortcut available for things that 
require a NOTICE entry.

In HBase this took a long time to get right and we largely had to do it by 
manually reviewing every artifact and leveraging the assembly and 
remote-resources plugins. I had wanted to use Apache Whisker, but I ran into 
the same kind of problems as [~ajisakaa] mentioned.


> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13006) FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile() doesnt run

2016-04-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13006:
---

 Summary: 
FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()
 doesnt run
 Key: HADOOP-13006
 URL: https://issues.apache.org/jira/browse/HADOOP-13006
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


The test method 
{{FileContextMainOperationsBaseTest.testListStatusThrowsExceptionForNonExistentFile()}}
 doesn't run, because it's a JUnit 3 test in a JUnit 4 class; it needs a 
{{@Test}} annotation.
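The fix is a one-line annotation. As a self-contained illustration (using a local stand-in for {{org.junit.Test}} so the sketch does not depend on JUnit), a JUnit-4-style runner only discovers annotated methods, which is why the JUnit-3-style method above is silently skipped:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class JUnit4DiscoverySketch {
  // Stand-in for org.junit.Test, so this sketch is self-contained.
  @Retention(RetentionPolicy.RUNTIME)
  @interface Test {}

  static class SomeTests {
    // JUnit 3 style: public, name starts with "test", but a JUnit 4
    // runner ignores it because there is no annotation.
    public void testListStatusThrowsExceptionForNonExistentFile() {}

    // JUnit 4 style: discovered via the annotation.
    @Test
    public void testAnnotated() {}
  }

  // Mimics JUnit 4 discovery: collect only @Test-annotated methods.
  static List<String> discover(Class<?> c) {
    List<String> found = new ArrayList<>();
    for (Method m : c.getDeclaredMethods()) {
      if (m.isAnnotationPresent(Test.class)) {
        found.add(m.getName());
      }
    }
    return found;
  }
}
```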





[jira] [Updated] (HADOOP-12982) Document missing S3A and S3 properties

2016-04-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12982:
-
Attachment: HADOOP-12982.002.patch

Patch 002. This revision adds a notice that s3: is being phased out.

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.





[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230223#comment-15230223
 ] 

Steve Loughran commented on HADOOP-12994:
-

Actually, I will do supports-readfully.

There's an implicit requirement there (how seeks past EOF are handled): even 
if the FS doesn't support that, readFully is expected to catch and downgrade 
the exception.

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch
>
>
> Some work on S3a has shown up that there aren't tests catching regressions in 
> readFully, reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS





[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230184#comment-15230184
 ] 

Steve Loughran commented on HADOOP-12994:
-

# the protected thing was just an IDE hint; I'll put it down as a distraction
# I ended up not using that {{supports-positioned-readable}} flag, as it comes 
for free if you handle {{seek()}}, and that is implicitly flagged if you 
implement a subclass of the seek tests.
# I'll look at that azure test
# I'll look at that azure test

> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch
>
>
> Some work on S3a has shown up that there aren't tests catching regressions in 
> readFully, reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS





[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230114#comment-15230114
 ] 

Hadoop QA commented on HADOOP-12973:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} root: patch generated 6 new + 21 unchanged - 1 fixed = 27 
total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 52s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 23s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 205m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 

[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-04-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230005#comment-15230005
 ] 

Akira AJISAKA commented on HADOOP-12893:


I tried to use Apache Whisker but I couldn't because of poor documentation and 
releases. Instead I used license-maven-plugin 
(http://www.mojohaus.org/license-maven-plugin/), and the output is 
https://gist.github.com/aajisaka/e602d4e5d7569f5fe32149193c81b749

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.





[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229902#comment-15229902
 ] 

Steve Loughran commented on HADOOP-12969:
-

Hadoop IPC is invaluable in YARN apps; you may as well assume that if something 
is marked as for MapReduce, it means MapReduce, Tez, and anything else which 
runs on a YARN cluster that hasn't implemented its own IPC and doesn't want to 
worry about Kerberos auth.

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch, HADOOP-12969.001.patch, 
> HADOOP-12969.002.patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909





[jira] [Commented] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-04-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229897#comment-15229897
 ] 

Steve Loughran commented on HADOOP-11874:
-

No problem. FWIW, I just test against S3 using my own account. Provided the 
test teardowns delete all the S3 files afterwards, the cost is nearly nothing.

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>
> From a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570 it's less likely. It will only 
> happen in those cases when abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.
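The catch-and-abort behaviour the description asks for can be sketched as below. This is an illustration only: the class and method names are assumptions, not the real {{S3AInputStream}}; the point is that an IOE from the wrapped stream's close() triggers abort() instead of propagating as a spurious failure.

```java
import java.io.IOException;
import java.io.InputStream;

public class CleanCloseSketch {
  private final InputStream wrappedStream;
  private boolean aborted = false;

  public CleanCloseSketch(InputStream wrapped) {
    this.wrappedStream = wrapped;
  }

  // In the real client, abort() would drop the HTTP connection
  // without draining it; here it only records that it ran.
  public void abort() {
    aborted = true;
  }

  public boolean wasAborted() {
    return aborted;
  }

  // The "clean" close path: catch IOEs from the wrapped stream and
  // downgrade to abort() rather than surfacing the exception.
  public void close() {
    try {
      wrappedStream.close();
    } catch (IOException e) {
      abort();
    }
  }
}
```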





[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-04-07 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v10.patch

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.
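A minimal sketch of what "pluggable" could look like (the interface and class names here are illustrative, not the actual patch): callers depend on an interface, with a directory-walking default that configuration can later swap out.

```java
import java.io.File;

public class PluggableDuSketch {
  // Hypothetical plug point: anything that can report bytes used.
  public interface GetSpaceUsed {
    long getUsed();
  }

  // Default implementation: walk the directory tree, much as the
  // shell "du" call does today. A replacement (e.g. df-based) would
  // implement the same interface.
  public static class WalkingDu implements GetSpaceUsed {
    private final File dir;

    public WalkingDu(File dir) {
      this.dir = dir;
    }

    @Override
    public long getUsed() {
      return sizeOf(dir);
    }

    private static long sizeOf(File f) {
      if (f.isFile()) {
        return f.length();
      }
      long total = 0;
      File[] children = f.listFiles();  // null for non-directories
      if (children != null) {
        for (File c : children) {
          total += sizeOf(c);
        }
      }
      return total;
    }
  }
}
```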





[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12909:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Xiaobing!

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch, HADOOP-12909-HDFS-9924.008.patch, 
> HADOOP-12909-HDFS-9924.009.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls: the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can arrive out of order. Indeed, a 
> synchronous call is implemented by invoking wait() in the caller thread in 
> order to wait for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode. In 
> asynchronous mode, a call returns once the request has been sent out, 
> without waiting for the response from the server.
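The asynchronous mode described above (return as soon as the request is handed to the sender; collect the response later) can be sketched with a future. This illustrates the pattern only and is not the actual ipc.Client API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncCallSketch {
  // Stand-in for the shared connection's sender thread pool.
  private final ExecutorService sender = Executors.newSingleThreadExecutor();

  // Asynchronous call: hand the request off and return immediately.
  // The caller gets the response later through the future, so
  // responses may complete out of order relative to the calls.
  public CompletableFuture<String> call(String request) {
    CompletableFuture<String> response = new CompletableFuture<>();
    sender.submit(() -> response.complete("ack:" + request));
    return response;
  }

  // A synchronous call built on the async one by blocking; the wait()
  // mentioned in the description corresponds to this join.
  public String callBlocking(String request) {
    return call(request).join();
  }

  public void shutdown() {
    sender.shutdown();
  }
}
```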





[jira] [Commented] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-04-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229805#comment-15229805
 ] 

Surendra Singh Lilhore commented on HADOOP-11874:
-

Sorry [~ste...@apache.org], I couldn't look into this. I didn't have an S3 
installation, so I couldn't proceed.

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>
> From a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570 it's less likely. It will only 
> happen in those cases when abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.





[jira] [Resolved] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-8813.
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.8.0

> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations
> ---
>
> Key: HADOOP-8813
> URL: https://issues.apache.org/jira/browse/HADOOP-8813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-8813.patch, HADOOP-8813.patch
>
>
> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations





[jira] [Commented] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229801#comment-15229801
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-8813:
-

Merged to branch-2 and branch-2.8.

> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations
> ---
>
> Key: HADOOP-8813
> URL: https://issues.apache.org/jira/browse/HADOOP-8813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-8813.patch, HADOOP-8813.patch
>
>
> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations





[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229793#comment-15229793
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12984:
--

That's great.  Thanks!

> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests use the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, we'd 
> like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.
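A sketch of the kind of helper being proposed (the property name and default below are assumptions for illustration; the real method lives in {{GenericTestUtils}}): resolve test data under a build-relative root so {{mvn clean}} removes it.

```java
import java.io.File;

public class TestDirSketch {
  // Assumed property and default for this sketch: a build-relative
  // root keeps test files under target/, where "mvn clean" deletes
  // them, instead of a leak-prone path like test/build/data.
  public static File getTestDir(String subdir) {
    String root = System.getProperty("test.build.data", "target/test/data");
    return new File(root, subdir);
  }
}
```

A test would then call something like {{getTestDir("TestFileUtil")}} instead of hard-coding a path.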





[jira] [Reopened] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reopened HADOOP-8813:
-

Reopen for merging to branch-2.

> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations
> ---
>
> Key: HADOOP-8813
> URL: https://issues.apache.org/jira/browse/HADOOP-8813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HADOOP-8813.patch, HADOOP-8813.patch
>
>
> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations





[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229788#comment-15229788
 ] 

Vinayakumar B commented on HADOOP-12984:


I have already pushed the one below. :)
{code}
 try {
-  Path srcPath = new Path(TEST_ROOT_DIR, src);
-  Path dstPath = new Path(TEST_ROOT_DIR, dst);
+  Path srcPath = new Path(TEST_DIR.getAbsolutePath(), src);
+  Path dstPath = new Path(TEST_DIR.getAbsolutePath(), dst);
   boolean deleteSource = false;
   String addString = null;
   result = FileUtil.copyMerge(fs, srcPath, fs, dstPath, deleteSource, conf,
{code}


> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests use the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, we'd 
> like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.





[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229786#comment-15229786
 ] 

Akira AJISAKA commented on HADOOP-12984:


Thanks Nicholas and Vinayakumar! I just prepared the same addendum patch. My 
late +1.

> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests use the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, we'd 
> like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.





[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229783#comment-15229783
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12984:
--

No problem.  Below is a suggested fix.
{code}
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
index c478681..76ccb75 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
@@ -566,8 +566,8 @@ private boolean copyMerge(String src, String dst)
 final boolean result;
 
 try {
-  Path srcPath = new Path(TEST_ROOT_DIR, src);
-  Path dstPath = new Path(TEST_ROOT_DIR, dst);
+  Path srcPath = new Path(TEST_DIR.toString(), src);
+  Path dstPath = new Path(TEST_DIR.toString(), dst);
   boolean deleteSource = false;
   String addString = null;
   result = FileUtil.copyMerge(fs, srcPath, fs, dstPath, deleteSource, conf,
{code}


> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests use the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, we'd 
> like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.





[jira] [Updated] (HADOOP-11661) Deprecate FileUtil#copyMerge

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-11661:
-
Component/s: util

> Deprecate FileUtil#copyMerge
> 
>
> Key: HADOOP-11661
> URL: https://issues.apache.org/jira/browse/HADOOP-11661
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-11661-002.patch, HADOOP-11661-003-branch-2.patch, 
> HADOOP-11661-branch-2-002.patch, HADOOP-11661.patch
>
>
>  FileUtil#copyMerge is currently unused in the Hadoop source tree. In 
> branch-1, it had been part of the implementation of the hadoop fs -getmerge 
> shell command. In branch-2, the code for that shell command was rewritten in 
> a way that no longer requires this method.
> Please check more details here..
> https://issues.apache.org/jira/browse/HADOOP-11392?focusedCommentId=14339336=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14339336





[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229778#comment-15229778
 ] 

Vinayakumar B commented on HADOOP-12984:


Thanks [~szetszwo], I have pushed addendum commit to fix the compilation.

> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests use the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on Jenkins 
> (and dirtying all development systems with paths which {{mvn clean}} will 
> miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, we'd 
> like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.





[jira] [Comment Edited] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229768#comment-15229768
 ] 

Tsz Wo Nicholas Sze edited comment on HADOOP-12909 at 4/7/16 6:25 AM:
--

> ... Actually, there aren't any visibility tags on Client and Server at all: 
> I'd recommend @Public, @Evolving for the whole class ...

Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving by 
HADOOP-8813.  I think we don't need to change them to public.  Let's continue 
the discussion in HADOOP-12969.


was (Author: szetszwo):
> ... Actually, there aren't any visibility tags on Client and Server at all: 
> I'd recommend @Public, @Evolving for the whole class ...

Indeed Client and Server are current annotate as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813.  I think we need to change it to public.  Let's continue the 
discussion in HADOOP-12969.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch, HADOOP-12909-HDFS-9924.008.patch, 
> HADOOP-12909-HDFS-9924.009.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls: the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can arrive out of order. Indeed, a 
> synchronous call is implemented by invoking wait() in the caller thread in 
> order to wait for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode. In 
> asynchronous mode, a call returns once the request has been sent out, 
> without waiting for the response from the server.





[jira] [Comment Edited] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229770#comment-15229770
 ] 

Tsz Wo Nicholas Sze edited comment on HADOOP-12969 at 4/7/16 6:26 AM:
--

Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813. I think we don't need to change them to public. 


was (Author: szetszwo):
Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813. I think we need to change them to public. 

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch, HADOOP-12969.001.patch, 
> HADOOP-12969.002.patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909.





[jira] [Comment Edited] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229770#comment-15229770
 ] 

Tsz Wo Nicholas Sze edited comment on HADOOP-12969 at 4/7/16 6:24 AM:
--

Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813. I think we need to change them to public. 


was (Author: szetszwo):
Indeed Client and Server are current annotate as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813. I think we need to change them to public. 



[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229770#comment-15229770
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12969:
--

Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813. I think we need to change them to public. 



[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229768#comment-15229768
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12909:
--

> ... Actually, there aren't any visibility tags on Client and Server at all: 
> I'd recommend @Public, @Evolving for the whole class ...

Indeed, Client and Server are currently annotated as 
@InterfaceAudience.LimitedPrivate and @InterfaceStability.Evolving; see 
HADOOP-8813.  I think we need to change them to public.  Let's continue the 
discussion in HADOOP-12969.



[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-04-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229762#comment-15229762
 ] 

Hudson commented on HADOOP-12909:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9575 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9575/])
HADOOP-12909. Change ipc.Client to support asynchronous calls.  (szetszwo: rev 
a62637a413ad88c4273d3251892b8fc1c05afa34)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java




[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229753#comment-15229753
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12984:
--

There is a compilation error in branch-2:
{code}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/Users/szetszwo/hadoop/b-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java:[569,31]
 cannot find symbol
  symbol:   variable TEST_ROOT_DIR
  location: class org.apache.hadoop.fs.TestFileUtil
[ERROR] 
/Users/szetszwo/hadoop/b-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java:[570,31]
 cannot find symbol
  symbol:   variable TEST_ROOT_DIR
  location: class org.apache.hadoop.fs.TestFileUtil
[INFO] 2 errors 
{code}


> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-12984-003.patch, HDFS-9263-001.patch, 
> HDFS-9263-002.patch, HDFS-9263-003.patch
>
>
> We have seen that some tests used the path {{test/build/data}} to store 
> files, leaking files which fail the new post-build RAT checks on 
> Jenkins (and dirtying all development systems with paths which {{mvn clean}} 
> will miss).
> To prevent bugs such as MAPREDUCE-6589 and HDFS-9571 from recurring, 
> we'd like to introduce new utility methods to get a temporary directory path 
> easily, and use them in tests.
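The proposed helper boils down to deriving test paths from a system property instead of a hard-coded {{test/build/data}}. A minimal sketch of the idea follows; the property name and default here follow common Hadoop convention but are assumptions about the final API, not a copy of it:

```java
import java.io.File;

public class TestDirSketch {
    // Resolve a per-test temp directory from a system property, so builds
    // can redirect it (e.g. under target/, which `mvn clean` removes)
    // instead of scattering files in a hard-coded path the clean misses.
    static File getTestDir(String subdir) {
        String base = System.getProperty("test.build.data", "target/test-dir");
        return new File(base, subdir);
    }

    public static void main(String[] args) {
        File dir = getTestDir("TestFileUtil");
        dir.mkdirs();  // create on demand; a no-op if it already exists
        System.out.println(dir.getPath());
    }
}
```

A test then calls getTestDir(...) rather than referencing a shared TEST_ROOT_DIR constant -- which is exactly the symbol the branch-2 compilation error above shows TestFileUtil still expecting.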





[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-04-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12909:
-
Hadoop Flags: Reviewed

+1 the new patch looks great.
