[jira] [Comment Edited] (HADOOP-10075) Update jetty dependency to version 9

2016-09-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472876#comment-15472876
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-10075 at 9/8/16 5:53 AM:
-

[~rkanter] I looked over the result of {{grep -r jetty --include="*.java"}}: it's 
only used on the server side, excluding tests, so we can upgrade Jetty with 
minimal pain on branch-3 :-)


was (Author: ozawa):
[~rkanter] I looked over the result of  grep -r jetty --include="*.java": it's 
used only server-side, so we can upgrade jetty safely :-)

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-07 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-003.patch

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch
>
>
> When copying files to S3, depending on the file listing some mappers can hit 
> S3 partition hotspots. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> tasks tend to be a lot slower than others. It would be good to randomize the 
> file paths written out in SimpleCopyListing to avoid this issue.
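The randomization described above can be sketched in plain Java. This is a hypothetical standalone illustration, not the attached SimpleCopyListing patch: shuffling the listing spreads lexicographically adjacent paths, such as date partitions, across mappers instead of concentrating them on one S3 partition.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Hypothetical sketch (not the actual SimpleCopyListing patch): shuffle the
// copy listing so that lexicographically adjacent paths, e.g. date
// partitions, are spread across mappers instead of hitting one S3 partition.
public class ShuffleListing {

    // Shuffle a copy of the listing; a fixed seed keeps the order
    // reproducible across retries of the same job.
    public static List<String> randomize(List<String> paths, long seed) {
        List<String> shuffled = new ArrayList<>(paths);
        Collections.shuffle(shuffled, new Random(seed));
        return shuffled;
    }

    public static void main(String[] args) {
        List<String> paths = new ArrayList<>();
        for (int day = 1; day <= 30; day++) {
            paths.add(String.format("warehouse/events/date=2016-09-%02d/part-0", day));
        }
        List<String> shuffled = randomize(paths, 42L);
        System.out.println(shuffled.size()); // prints 30
    }
}
```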






[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472876#comment-15472876
 ] 

Tsuyoshi Ozawa commented on HADOOP-10075:
-

[~rkanter] I looked over the result of {{grep -r jetty --include="*.java"}}: it's 
only used on the server side, so we can upgrade Jetty safely :-)

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472754#comment-15472754
 ] 

John Zhuge commented on HADOOP-7352:


Move review comments from HADOOP-13191 to here.

Steve commented:
bq. Be aware that the FS shell expects globStatus to return null in certain 
conditions: someone needs to look at all uses of this call and make sure that 
it isn't being used

John commented:

Went through 99 usages of {{globStatus(Path)}} in the following components: 
hadoop-common, hadoop-distcp, hadoop-rumen, hadoop-streaming, hadoop-yarn-*, 
hadoop-hdfs (test only).

Most {{globStatus}} callers check both that {{listFileStatus}} is not {{null}} 
and that {{listFileStatus.length > 0}}, or expect {{listFileStatus}} to be 
non-{{null}}, except:
{code:title=fs.shell.PathData}
  public static PathData[] expandAsGlob(String pattern, Configuration conf)
  throws IOException {
Path globPath = new Path(pattern);
FileSystem fs = globPath.getFileSystem(conf);
FileStatus[] stats = fs.globStatus(globPath);
PathData[] items = null;

if (stats == null) {
  // remove any quoting in the glob pattern
  pattern = pattern.replaceAll("(.)", "$1");
  // not a glob & file not found, so add the path with a null stat
  items = new PathData[]{ new PathData(fs, pattern, null) };
} else {
{code}

I will include the fix for {{expandAsGlob}} in the next patch.

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null when the target 
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus 
> may still return null when the target directory exists but does not grant 
> read permission.  This causes NPEs in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.
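The proposed contract can be illustrated with a standalone sketch against java.io.File. This is hypothetical and simplified; the real change targets the Hadoop FileSystem/RawLocalFileSystem classes, not java.io.File. The idea is a listing that never returns null: it throws FileNotFoundException for a missing directory and AccessDeniedException for an existing but unreadable one.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Hypothetical standalone sketch of the proposed contract, using java.io.File
// rather than the real Hadoop FileSystem classes: a listing helper that never
// returns null -- it throws FileNotFoundException when the directory is
// missing and AccessDeniedException when the directory exists but cannot be
// read (File.list() returns null in that case on the local filesystem).
public class SafeList {

    public static String[] list(File dir) throws IOException {
        if (!dir.exists()) {
            throw new FileNotFoundException("File " + dir + " does not exist");
        }
        if (!dir.isDirectory()) {
            throw new IOException(dir + " is not a directory");
        }
        String[] names = dir.list();
        if (names == null) {
            // Directory exists but could not be read; surface the access
            // error instead of handing callers a null (and a later NPE).
            throw new AccessDeniedException(dir.toString());
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(list(tmp).length >= 0); // prints true
    }
}
```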






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Affects Version/s: 2.6.0
 Target Version/s: 3.0.0-alpha2
   Status: In Progress  (was: Patch Available)

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null when the target 
> directory does not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus 
> may still return null when the target directory exists but does not grant 
> read permission.  This causes NPEs in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472654#comment-15472654
 ] 

Hadoop QA commented on HADOOP-13579:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
42s{color} | {color:red} root in branch-2.6 failed with JDK v1.8.0_101. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
49s{color} | {color:red} root in branch-2.6 failed with JDK v1.7.0_111. {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.6 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.6 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.6 has 
66 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} branch-2.6 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.6 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
35s{color} | {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 35s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
46s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 46s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2351 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
55s{color} | {color:red} The patch has 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m  1s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 128 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
|   | hadoop.http.TestSSLHttpServer |
|   | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.io.TestUTF8 |
|   | hadoop.http.TestHttpCookieFlag |
| JDK v1.7.0_111 Failed junit tests 

[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472593#comment-15472593
 ] 

Tsuyoshi Ozawa commented on HADOOP-10075:
-

[~rkanter] I think it's okay to upgrade on the server side, but I don't know 
whether we can do so on the client side. Please check HADOOP-13070.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Updated] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2016-09-07 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13579:

Attachment: HADOOP-13579-branch-2.6.003.patch
HADOOP-13579-branch-2.7.003.patch

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.6.003.patch, 
> HADOOP-13579-branch-2.7.001.patch, HADOOP-13579-branch-2.7.002.patch, 
> HADOOP-13579-branch-2.7.003.patch
>
>
> Reported by [~chiwanpark]
> bq. Since the 2.7.3 release, Client.get/setPingInterval has changed from 
> public to package-private.
> bq. Giraph is one example broken by this change. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)






[jira] [Assigned] (HADOOP-10075) Update jetty dependency to version 9

2016-09-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned HADOOP-10075:
--

Assignee: Robert Kanter  (was: Robert Rati)

I'm going to pick this up and get it moving forward again, unless anyone has 
any objections?

Jetty 6 went EoL in 2010 (6 years ago!), so we should really do this.  Hadoop 3 
is a great opportunity to finally get this done.  I agree with [~raviprak]: 
even if we later move to Jersey, that doesn't need to block upgrading Jetty.

I've started looking into the code, and I think it will be doable, just a 
little tedious to refactor things like {{HttpServer2}} to the Jetty 9 APIs.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-09-07 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Target Version/s: 3.0.0-alpha2

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Issue Comment Deleted] (HADOOP-11684) S3a to use thread pool that blocks clients

2016-09-07 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-11684:
--
Comment: was deleted

(was: [~ste...@apache.org] I assume you are testing with fast upload on?  
Should we file a new jira for this?)

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks tasks are queued and another (part) 
> upload wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
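The blocking behavior described above can be sketched with a Semaphore guarding an ordinary ThreadPoolExecutor. This is a minimal hypothetical illustration of the technique, not the attached patch: the semaphore caps the number of in-flight tasks, so a saturated pool makes submitters wait instead of failing.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the idea (not the attached patch): a semaphore caps
// the number of in-flight tasks, so submit() blocks the calling thread
// instead of letting the executor throw RejectedExecutionException.
public class BlockingSubmitPool {

    private final ThreadPoolExecutor pool;
    private final Semaphore permits;

    public BlockingSubmitPool(int threads, int maxQueued) {
        // Unbounded queue: the semaphore, not the queue, enforces the bound.
        this.pool = new ThreadPoolExecutor(threads, threads, 0L,
                TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
        this.permits = new Semaphore(threads + maxQueued);
    }

    public void submit(final Runnable task) throws InterruptedException {
        permits.acquire(); // blocks the client when the pool is saturated
        try {
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release(); // free a slot for the next submitter
                }
            });
        } catch (RuntimeException e) {
            permits.release(); // task never ran; give the permit back
            throw e;
        }
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingSubmitPool pool = new BlockingSubmitPool(4, 8);
        final java.util.concurrent.atomic.AtomicInteger done =
                new java.util.concurrent.atomic.AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.submit(done::incrementAndGet);
        }
        pool.shutdown();
        System.out.println(done.get()); // prints 100
    }
}
```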






[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13218:
-
Fix Version/s: (was: 3.0.0-alpha2)

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> The patch for HADOOP-12579 contains lots of work migrating the remaining 
> Hadoop-side tests to the new RPC engine, plus nice cleanups. HADOOP-12579 
> will be reverted to allow some time for YARN/MapReduce-side changes; this 
> issue is opened to recommit most of the test-related work from HADOOP-12579 
> for easier tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471963#comment-15471963
 ] 

Hudson commented on HADOOP-13218:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10406 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10406/])
Revert "HADOOP-13218. Migrate other Hadoop side tests to prepare for 
(kai.zheng: rev d355573f5681f43e760a1bc23ebed553bd35fca5)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCallBenchmark.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestMultipleProtocolServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) hadoop-common-project/hadoop-common/src/test/proto/test.proto
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/server/HSAdminServer.java


> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> The patch for HADOOP-12579 contains lots of work migrating the remaining 
> Hadoop-side tests to the new RPC engine, plus nice cleanups. HADOOP-12579 
> will be reverted to allow some time for YARN/MapReduce-side changes; this 
> issue is opened to recommit most of the test-related work from HADOOP-12579 
> for easier tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471925#comment-15471925
 ] 

Hadoop QA commented on HADOOP-7352:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} root: The patch generated 1 new + 294 unchanged 
- 0 fixed = 295 total (was 294) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-7352 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827438/HADOOP-7352.001.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b636fdcfb894 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f414d5e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10458/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471921#comment-15471921
 ] 

Hadoop QA commented on HADOOP-13587:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
10 new + 81 unchanged - 1 fixed = 91 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
29s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827451/HADOOP-13587-01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 174b636187c7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f414d5e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10459/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10459/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10459/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 

[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471909#comment-15471909
 ] 

Kai Zheng commented on HADOOP-13218:


Done. Thanks all for the discussion.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-07 Thread Zoran Dimitrijevic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoran Dimitrijevic updated HADOOP-13587:

Target Version/s: 3.0.0-alpha1
  Status: Patch Available  (was: Open)

> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zoran Dimitrijevic
>Priority: Minor
> Attachments: HADOOP-13587-01.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it is 
> not honored even when it is set. Current code always overwrites it with either 
> the default value (a Java constant) or the -bandwidth command line option.
> The expected behavior (at least how I would expect it) is to honor the value 
> set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
> command line flag. If there is no value set in the .xml file or as a command 
> line flag, then the constant from the Java code should be used.
> Additionally, I would expect that we also try to get values from 
> distcp-site.xml, as other Hadoop systems do.






[jira] [Updated] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-07 Thread Zoran Dimitrijevic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoran Dimitrijevic updated HADOOP-13587:

Attachment: HADOOP-13587-01.patch

> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zoran Dimitrijevic
>Priority: Minor
> Attachments: HADOOP-13587-01.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it is 
> not honored even when it is set. Current code always overwrites it with either 
> the default value (a Java constant) or the -bandwidth command line option.
> The expected behavior (at least how I would expect it) is to honor the value 
> set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
> command line flag. If there is no value set in the .xml file or as a command 
> line flag, then the constant from the Java code should be used.
> Additionally, I would expect that we also try to get values from 
> distcp-site.xml, as other Hadoop systems do.






[jira] [Reopened] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reopened HADOOP-13218:


Reopening it as discussed; will revert the work soon.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Created] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-07 Thread Zoran Dimitrijevic (JIRA)
Zoran Dimitrijevic created HADOOP-13587:
---

 Summary: distcp.map.bandwidth.mb is overwritten even when 
-bandwidth flag isn't set
 Key: HADOOP-13587
 URL: https://issues.apache.org/jira/browse/HADOOP-13587
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 3.0.0-alpha1
Reporter: Zoran Dimitrijevic
Priority: Minor


distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it is 
not honored even when it is set. Current code always overwrites it with either 
the default value (a Java constant) or the -bandwidth command line option.

The expected behavior (at least how I would expect it) is to honor the value 
set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
command line flag. If there is no value set in the .xml file or as a command 
line flag, then the constant from the Java code should be used.

Additionally, I would expect that we also try to get values from 
distcp-site.xml, as other Hadoop systems do.
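The precedence the reporter describes can be sketched as follows. This is an illustrative model only, not the actual DistCp code: the class and method names are hypothetical, and the constant value is a stand-in.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed precedence for distcp.map.bandwidth.mb:
// explicit -bandwidth flag > value loaded from the .xml config files >
// hard-coded Java constant. Class and method names are hypothetical.
public class BandwidthPrecedence {
    // Stand-in for the constant in the Java code; not DistCp's real default.
    static final int DEFAULT_BANDWIDTH_MB = 100;

    static int resolveBandwidth(Integer cliFlag, Map<String, String> xmlConf) {
        if (cliFlag != null) {
            return cliFlag; // -bandwidth was given explicitly: it wins
        }
        String fromXml = xmlConf.get("distcp.map.bandwidth.mb");
        if (fromXml != null) {
            return Integer.parseInt(fromXml); // honor the .xml config value
        }
        return DEFAULT_BANDWIDTH_MB; // nothing configured: fall back to the constant
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("distcp.map.bandwidth.mb", "50");
        System.out.println(resolveBandwidth(null, conf)); // 50: xml value honored
        System.out.println(resolveBandwidth(200, conf));  // 200: flag overrides
        System.out.println(resolveBandwidth(null, new HashMap<>())); // 100: constant
    }
}
```

The key design point is that the .xml value is only consulted when the flag is absent, so a configured default survives unless the user overrides it.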






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471857#comment-15471857
 ] 

Kai Zheng commented on HADOOP-13218:


bq. The summary implies this is just about tests, but it is not. It changes the 
default RPC engine. If that change is unexpected then this patch should be 
reverted, that portion removed from the patch, and then reviewed and 
recommitted.
This is convincing. OK, let me proceed that way.

Thanks for your detailed information and thoughts!


> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471847#comment-15471847
 ] 

Jason Lowe commented on HADOOP-13218:
-

The Jenkins build here didn't hit the failures because the precommit builds 
only run the tests in the maven projects that were touched by the patch.  This 
patch touched MapReduce but only in the history server project.  If it had 
touched the jobclient project then you would have seen the failures.  In short, 
you cannot know for certain that a patch doesn't break something when the 
precommit build is clean because the precommit build doesn't run *all* of the 
tests due to how long that would take for each precommit build.

See 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/#showFailuresLink
 for a nightly build sample.  Any precommit build that needs to run tests in 
jobclient is going to report a lot of failures due to this still being in the 
build.

bq. The changing of the default RPC engine is unexpected and deeply sorry for 
that.

The summary implies this is just about tests, but it is not.  It changes the 
default RPC engine.  If that change is unexpected then this patch should be 
reverted, that portion removed from the patch, and then reviewed and 
recommitted.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471836#comment-15471836
 ] 

Kihwal Lee commented on HADOOP-13218:
-

-1 Please revert it first and engage in further discussions. It is also 
unclear who the real author of the patch is.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471814#comment-15471814
 ] 

Kai Zheng commented on HADOOP-13218:


I have a hard time understanding why the Jenkins build here didn't hit the 
failures found in the precommit and nightly builds. Could you post the relevant 
test cases either here or in MAPREDUCE-6775? Kindly let me check more.

bq. The same is true here. Why are we treating it differently?
HADOOP-12579 aims to remove the old RPC engine from the code base, which 
has to be done when all the sub-tasks are finished. The work here is part of 
the preparation for that removal. The change of the default RPC engine was 
unexpected, and I am deeply sorry for that.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471796#comment-15471796
 ] 

Jason Lowe commented on HADOOP-13218:
-

What's particularly upsetting about this instance is that HADOOP-12579 was 
reverted for essentially the same reason.  It completely broke MapReduce.  The 
same is true here.  Why are we treating it differently?

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null

2016-09-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471785#comment-15471785
 ] 

John Zhuge commented on HADOOP-13191:
-

Went through 99 usages of {{globStatus(Path)}} in the following components: 
hadoop-common, hadoop-distcp, hadoop-rumen, hadoop-streaming, hadoop-yarn-*, 
hadoop-hdfs (test only).

Most {{globStatus}} callers check both that {{listFileStatus}} is not null and 
that {{listFileStatus.length > 0}}, or simply expect {{listFileStatus}} to be 
non-null, except:
{code:title=fs.shell.PathData}
  public static PathData[] expandAsGlob(String pattern, Configuration conf)
      throws IOException {
    Path globPath = new Path(pattern);
    FileSystem fs = globPath.getFileSystem(conf);
    FileStatus[] stats = fs.globStatus(globPath);
    PathData[] items = null;

    if (stats == null) {
      // remove any quoting in the glob pattern
      pattern = pattern.replaceAll("\\\\(.)", "$1");
      // not a glob & file not found, so add the path with a null stat
      items = new PathData[]{ new PathData(fs, pattern, null) };
    } else {
{code}
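The quote-removal step in that null branch can be demonstrated in isolation. A minimal sketch, assuming the pattern uses backslash-quoting of glob metacharacters the way {{PathData}} expects; the class and method names are made up for the example:

```java
public class GlobQuoting {
    // Strip backslash-quoting from a non-matching glob pattern, mirroring
    // the "remove any quoting" step above. The regex \\(.) matches a literal
    // backslash followed by any character and keeps only that character.
    static String unquote(String pattern) {
        return pattern.replaceAll("\\\\(.)", "$1");
    }

    public static void main(String[] args) {
        System.out.println(unquote("dir/file\\*name")); // dir/file*name
        System.out.println(unquote("plain/path"));      // plain/path (unchanged)
    }
}
```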


> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch, 
> HADOOP-13191.003.patch, HADOOP-13191.004.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
>     ArrayList<FileStatus> fileStatusList = new ArrayList<>();
>     for (FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon errors or return an empty list.
> To enforce the contract that null is an invalid return, update the javadoc and 
> leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus
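The proposed contract, returning an empty array rather than {{null}}, removes the need for the null checks quoted above. A minimal sketch of the normalization using plain string arrays instead of {{FileStatus}}; the helper name is hypothetical:

```java
import java.util.Arrays;

public class NullSafeListing {
    // Hypothetical helper: normalize a possibly-null listing to an empty
    // array, so callers can iterate without a null check.
    static String[] safeList(String[] maybeNull) {
        return maybeNull == null ? new String[0] : maybeNull;
    }

    public static void main(String[] args) {
        // Iterating a normalized listing never throws NullPointerException.
        for (String name : safeList(null)) {
            System.out.println(name); // never reached for a null listing
        }
        System.out.println(safeList(null).length);                         // 0
        System.out.println(Arrays.toString(safeList(new String[]{"a"}))); // [a]
    }
}
```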






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471776#comment-15471776
 ] 

Jason Lowe commented on HADOOP-13218:
-

bq. we need it in MapReduce side to trigger the MR side tests.

This patch modified the HDFS project and the MapReduce project, and apparently 
we didn't need a separate JIRA for that.  IMHO we don't need another JIRA for 
this.  This is breaking precommit and nightly builds until it's fixed.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471775#comment-15471775
 ] 

Kai Zheng commented on HADOOP-13218:


I thought your concern about cherry-picking was a good one, so the new issue 
should be linked from the master HADOOP-12579 issue as one of its tasks. HADOOP-12579 
contains quite a few sub-tasks that should be picked together in the future.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471754#comment-15471754
 ] 

Kai Zheng commented on HADOOP-13218:


Hi [~jlowe],

Thanks for your detailed thoughts. The major reason we need another JIRA is 
that we need it on the MapReduce side to trigger the MR side tests. As you can 
see here, the Jenkins build results look pretty clean and no relevant test 
failures were found. My other humble consideration is that when a simple fix 
does the work, we could do it that way for an easier-to-follow editing history 
of the related code.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13582) Implement logExpireToken in ZKDelegationTokenSecretManager

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471704#comment-15471704
 ] 

Hadoop QA commented on HADOOP-13582:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13582 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827261/HADOOP-13582.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 093ef7f5777a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f414d5e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10457/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10457/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10457/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement logExpireToken in ZKDelegationTokenSecretManager
> --
>
> Key: HADOOP-13582
> URL: https://issues.apache.org/jira/browse/HADOOP-13582
> 

[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471718#comment-15471718
 ] 

Jason Lowe commented on HADOOP-13218:
-

Why do we need yet another JIRA?  This just went in, breaks a major piece of 
functionality in the project, and reverts cleanly.  I don't see why this 
wouldn't be fixed as part of this JIRA.  Creating a separate JIRA has a couple 
of drawbacks:
# If this ever gets picked to other branches we need to remember to pick the 
"fix" as well or it breaks again.
# MapReduce remains broken until the other JIRA gets a patch up, reviewed, etc.

IMHO punting a followup fix only makes sense if the original can't be easily 
reverted, but that's not the case here.



> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining Hadoop side 
> tests based on the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce side related changes; this issue 
> is opened to recommit most of the test-related work in HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471712#comment-15471712
 ] 

Kai Zheng commented on HADOOP-13218:


MAPREDUCE-6775 was just filed for the fix.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains a lot of work to migrate the remaining Hadoop-side 
> tests to the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce-side changes; this issue is opened 
> to recommit most of the test-related work from HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null

2016-09-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471696#comment-15471696
 ] 

John Zhuge commented on HADOOP-13191:
-

Thanks [~ste...@apache.org] for the comments. I have duplicated this JIRA to 
HADOOP-7352 and posted patch 001 there. Patch 001 removes "@Nonnull".

> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch, 
> HADOOP-13191.003.patch, HADOOP-13191.004.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
> ArrayList<FileStatus> fileStatusList = new ArrayList<>();
> for(FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon errors or return an empty list.
> To enforce the contract that null is an invalid return, update javadoc and 
> leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471697#comment-15471697
 ] 

Kai Zheng commented on HADOOP-13218:


Sorry for the inconvenience. I'll file a new JIRA and sort out a fix soon.

The cause is that the default engine should remain the old one until 
MAPREDUCE-6706 is solved. The fix should be simple on the YARN/MapReduce side, so 
that the necessary test cases are exercised and triggered.
{code}
@@ -207,7 +208,7 @@ static synchronized RpcEngine getProtocolEngine(Class<?> 
protocol,
 RpcEngine engine = PROTOCOL_ENGINES.get(protocol);
 if (engine == null) {
   Class<?> impl = conf.getClass(ENGINE_PROP+"."+protocol.getName(),
-WritableRpcEngine.class);
+ProtobufRpcEngine.class);
   engine = (RpcEngine)ReflectionUtils.newInstance(impl, conf);
   PROTOCOL_ENGINES.put(protocol, engine);
 }
{code}
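As a toy illustration (plain Java, not Hadoop's actual classes; all names here are made up), the lookup-with-default pattern that the diff above changes can be sketched as a per-protocol registry whose fallback decides which engine unconfigured protocols get:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a per-protocol engine registry. The constructor's
// default plays the role of the hard-coded fallback in getProtocolEngine.
public class EngineRegistry {
    private final Map<String, String> engines = new HashMap<>();
    private final String defaultEngine;

    EngineRegistry(String defaultEngine) {
        this.defaultEngine = defaultEngine;
    }

    void setEngine(String protocol, String engine) {
        engines.put(protocol, engine);
    }

    String getEngine(String protocol) {
        // Mirrors the fallback logic: use the default when nothing is configured.
        return engines.getOrDefault(protocol, defaultEngine);
    }

    public static void main(String[] args) {
        // Flipping this default is what broke unconfigured (MapReduce) protocols.
        EngineRegistry registry = new EngineRegistry("WritableRpcEngine");
        registry.setEngine("ClientProtocol", "ProtobufRpcEngine");
        System.out.println(registry.getEngine("ClientProtocol"));        // ProtobufRpcEngine
        System.out.println(registry.getEngine("TaskUmbilicalProtocol")); // WritableRpcEngine
    }
}
```

This shows why changing the fallback silently changes behaviour for every protocol that never calls setEngine.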

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains a lot of work to migrate the remaining Hadoop-side 
> tests to the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce-side changes; this issue is opened 
> to recommit most of the test-related work from HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Status: Patch Available  (was: In Progress)

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471684#comment-15471684
 ] 

Steve Loughran commented on HADOOP-13191:
-

Be aware that the FS shell expects globStatus to return null in certain 
conditions: someone needs to look at all uses of this call and make sure that 
behaviour isn't being relied on.

-1 to adding @NonNull, for the dependency reasons.
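To illustrate the caller-side audit this implies, here is a minimal sketch (plain Java, with a made-up stand-in for globStatus, not Hadoop's real API) of the defensive pattern FsShell-style callers must keep using as long as null remains a legal "no match" result:

```java
import java.util.Arrays;

public class GlobCaller {
    // Hypothetical stand-in for FileSystem#globStatus, which may return
    // null when a pattern matches nothing.
    static String[] globStatus(String pattern) {
        return pattern.contains("*") ? null : new String[] { pattern };
    }

    static String[] listMatches(String pattern) {
        String[] matches = globStatus(pattern);
        // Defensive: normalize null to an empty array instead of risking an NPE.
        return matches == null ? new String[0] : matches;
    }

    public static void main(String[] args) {
        System.out.println(listMatches("no-such-*").length);        // 0
        System.out.println(Arrays.toString(listMatches("a.txt")));  // [a.txt]
    }
}
```

Any caller that skips the null check is exactly the kind of use the audit needs to find before the contract can be tightened.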

> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch, 
> HADOOP-13191.003.patch, HADOOP-13191.004.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
> ArrayList<FileStatus> fileStatusList = new ArrayList<>();
> for(FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon errors or return an empty list.
> To enforce the contract that null is an invalid return, update javadoc and 
> leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Attachment: HADOOP-7352.001.patch

Patch 001:
* {{RawLocalFileSystem#listStatus}} throws IOE upon access error. Leverage 
{{FileUtil#list}}.
* Update {{filesystem.md}}
* Pass unit test {{TestFSMainOperationsLocalFileSystem}} and 
{{TestFSMainOperationsWebHdfs}}

[~xiaochen] This patch follows {{HADOOP-13191.004.patch}}, addressing all 
your review comments.
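The semantics patch 001 aims for can be sketched with the plain JDK (this is not the actual RawLocalFileSystem code, and the class and method names below are illustrative): {{java.io.File#list()}} returns null both when the path is not a directory and when it cannot be read, so a listStatus-style wrapper should turn that null into an exception instead of propagating it.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

public class SafeList {
    // Sketch of listStatus-style semantics: never return null.
    static String[] list(File dir) throws IOException {
        if (!dir.exists()) {
            throw new FileNotFoundException(dir + " does not exist");
        }
        String[] names = dir.list();
        if (names == null) {
            // File#list() hides access errors behind null; surface them instead.
            throw new IOException("cannot list " + dir);
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        // The temp directory exists and is readable, so this never sees null.
        System.out.println(list(tmp).length >= 0);
    }
}
```

Callers then handle an IOException (or FileNotFoundException) rather than sprinkling null checks.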

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Summary: FileSystem#listStatus should throw IOE upon access error  (was: 
FileSystem#listStatus should throw IOException upon access error)

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOException upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Summary: FileSystem#listStatus should throw IOException upon access error  
(was: FileSystem#listStatus should throw IOException upon error)

> FileSystem#listStatus should throw IOException upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-7352) FileSystem::listStatus should throw IOException upon error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Summary: FileSystem::listStatus should throw IOException upon error  (was: 
Contracts of LocalFileSystem and DistributedFileSystem should require 
FileSystem::listStatus throw IOException not return null upon access error)

> FileSystem::listStatus should throw IOException upon error
> --
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOException upon error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Component/s: (was: fs/s3)

> FileSystem#listStatus should throw IOException upon error
> -
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-7352) FileSystem#listStatus should throw IOException upon error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-7352:
---
Summary: FileSystem#listStatus should throw IOException upon error  (was: 
FileSystem::listStatus should throw IOException upon error)

> FileSystem#listStatus should throw IOException upon error
> -
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471407#comment-15471407
 ] 

Hudson commented on HADOOP-13388:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10405/])
HADOOP-13388. Clean up TestLocalFileSystemPermission. Contributed by 
(aengineer: rev f414d5e118940cb98015c0b66e11102a9704a505)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java


> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13388.01.patch, HADOOP-13388.02.patch, 
> HADOOP-13388.03.patch, HADOOP-13388.04.patch
>
>
> I see more problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test 
> itself still passes (because of the return).






[jira] [Updated] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-09-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-13388:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: 3.0.0-alpha2
  Status: Resolved  (was: Patch Available)

[~boky01] Thanks for the contribution. I have committed this to trunk.

> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13388.01.patch, HADOOP-13388.02.patch, 
> HADOOP-13388.03.patch, HADOOP-13388.04.patch
>
>
> I see more problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test 
> itself still passes (because of the return).






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471322#comment-15471322
 ] 

Jason Lowe commented on HADOOP-13218:
-

Not just tests, MapReduce jobs can't run after this change. The tasks fail with 
this error:
{noformat}
2016-09-07 17:51:56,296 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy10.getTask(Unknown Source)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:137)
Caused by: com.google.protobuf.ServiceException: Too many or few parameters for 
request. Method: [getTask], Expected: 2, Actual: 1
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:199)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
... 2 more
{noformat}

I think this needs to be reverted until things are sorted out.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains a lot of work to migrate the remaining Hadoop-side 
> tests to the new RPC engine, plus nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/MapReduce-side changes; this issue is opened 
> to recommit most of the test-related work from HADOOP-12579 for easier 
> tracking and maintenance, as other sub-tasks did.






[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-09-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471344#comment-15471344
 ] 

Chris Nauroth commented on HADOOP-13518:


I tried again, but I still saw failures:

{code}
Running org.apache.hadoop.fs.s3.ITestInMemoryS3FileSystemContract
Tests run: 43, Failures: 5, Errors: 1, Skipped: 0, Time elapsed: 0.912 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3.ITestInMemoryS3FileSystemContract
{code}

Steve, I noticed my output says 43 tests were run, but your output says 31 
tests were run.  The patch adds 12 new tests in the base class, so maybe you 
still need to run {{mvn install}} in hadoop-common before running the test 
suite.

bq. FWIW, as this test is local, could it be converted from an Integration test 
to a simple Test?

That was my first instinct when I worked on HADOOP-13446.  Unfortunately, I 
discovered that even though the test is local, the code still has a dependency 
on the presence of auth-keys.xml because of extending 
{{S3FileSystemContractBaseTest}}.  A similar issue applies to 
{{ITestInMemoryNativeS3FileSystemContract}}.  It would be nice to break that 
dependency, but since we're avoiding changes in "s3:" and "s3n:", I chose to 
take the shorter path of converting them to integration tests.

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch-2. It went 
> into branch-1, and into trunk, but not in the bit in the middle.
> It adds:
> - more fs contract tests
> - s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> things have moved around a lot since then. 






[jira] [Commented] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-09-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471345#comment-15471345
 ] 

Anu Engineer commented on HADOOP-13388:
---

+1, LGTM. Thanks for the patch [~boky01]. I will commit this shortly.

> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13388.01.patch, HADOOP-13388.02.patch, 
> HADOOP-13388.03.patch, HADOOP-13388.04.patch
>
>
> I see more problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test 
> itself still passes (because of the return).






[jira] [Work started] (HADOOP-7352) Contracts of LocalFileSystem and DistributedFileSystem should require FileSystem::listStatus throw IOException not return null upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-7352 started by John Zhuge.
--
> Contracts of LocalFileSystem and DistributedFileSystem should require 
> FileSystem::listStatus throw IOException not return null upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471304#comment-15471304
 ] 

Kihwal Lee commented on HADOOP-13218:
-

So is it expected that many mapred tests are broken after this?

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the left Hadoop side 
> tests basing on the new RPC engine and nice cleanups. HADOOP-12579 will be 
> reverted to allow some time for YARN/Mapreduce side related changes, open 
> this to recommit most of the test related work in HADOOP-12579 for easier 
> tracking and maintain, as other sub-tasks did.






[jira] [Resolved] (HADOOP-13191) FileSystem#listStatus should not return null

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13191.
-
Resolution: Duplicate

> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch, 
> HADOOP-13191.003.patch, HADOOP-13191.004.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
> ArrayList<FileStatus> fileStatusList = new ArrayList<>();
> for(FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon errors or return an empty list.
> To enforce the contract that null is an invalid return, update javadoc and 
> leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus






[jira] [Assigned] (HADOOP-7352) Contracts of LocalFileSystem and DistributedFileSystem should require FileSystem::listStatus throw IOException not return null upon access error

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-7352:
--

Assignee: John Zhuge  (was: Matt Foley)

> Contracts of LocalFileSystem and DistributedFileSystem should require 
> FileSystem::listStatus throw IOException not return null upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Reporter: Matt Foley
>Assignee: John Zhuge
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in LocalFileSystem implementation today, FileSystem::listStatus 
> still may return null, when the target directory exists but does not grant 
> read permission.  This causes NPE in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2016-09-07 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471079#comment-15471079
 ] 

Aaron Fabbri commented on HADOOP-11684:
---

[~ste...@apache.org] I assume you are testing with fast upload on?  Should we 
file a new jira for this?

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception. F.i. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
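
The semaphore-gating idea behind such a pool can be sketched in plain JDK terms. This is an illustrative sketch, not Hadoop's `BlockingThreadPoolExecutorService`: `submit()` blocks the caller once all threads and queue slots are busy, instead of throwing `RejectedExecutionException`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/** Minimal semaphore-gated executor sketch; names are illustrative. */
public class BlockingExecutor {
    private final ExecutorService pool;
    private final Semaphore permits;

    public BlockingExecutor(int threads, int queueSize) {
        this.pool = Executors.newFixedThreadPool(threads);
        // One permit per running task plus one per queue slot.
        this.permits = new Semaphore(threads + queueSize);
    }

    public Future<?> submit(Runnable task) throws InterruptedException {
        permits.acquire();                    // block instead of rejecting
        try {
            return pool.submit(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();        // free the slot when done
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release();                // do not leak the permit
            throw e;
        }
    }

    public void shutdown() {
        pool.shutdown();
    }

    public boolean awaitTermination(long millis) throws InterruptedException {
        return pool.awaitTermination(millis, TimeUnit.MILLISECONDS);
    }
}
```

A caller that floods this executor is simply slowed down to the pool's drain rate, which is exactly the throttling behaviour the issue asks for.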






[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2016-09-07 Thread Benoit Sigoure (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470855#comment-15470855
 ] 

Benoit Sigoure commented on HADOOP-11090:
-

Nope.  What else is known to break?

> [Umbrella] Support Java 8 in Hadoop
> ---
>
> Key: HADOOP-11090
> URL: https://issues.apache.org/jira/browse/HADOOP-11090
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works  with Java 8 is important for the Apache community.
>   
> This JIRA is to track  the issues/experiences encountered during Java 8 
> migration. If you find a potential bug , please create a separate JIRA either 
> as a sub-task or linked into this JIRA.
> If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
> well. Or you can add  a comment  here.






[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470799#comment-15470799
 ] 

Steve Loughran commented on HADOOP-13518:
-

worksforme
{code}

---
 T E S T S
---
Running org.apache.hadoop.fs.s3.ITestInMemoryS3FileSystemContract
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.946 sec - in 
org.apache.hadoop.fs.s3.ITestInMemoryS3FileSystemContract
{code}

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch 2. It went 
> in to branch 1, and into trunk, but not in the bit in the middle.
> It adds
> -more fs contract tests
> -s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> things have moved around a lot since then. 






[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470801#comment-15470801
 ] 

Steve Loughran commented on HADOOP-13518:
-

FWIW, as this test is local, could it be converted from an Integration test to 
a simple Test?

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch 2. It went 
> in to branch 1, and into trunk, but not in the bit in the middle.
> It adds
> -more fs contract tests
> -s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> things have moved around a lot since then. 






[jira] [Commented] (HADOOP-13560) S3A to support huge file writes and operations -with tests

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470729#comment-15470729
 ] 

Steve Loughran commented on HADOOP-13560:
-

Looks related to this [stack overflow 
topic|http://stackoverflow.com/questions/30121218/aws-s3-uploading-large-file-fails-with-resetexception-failed-to-reset-the-requ]
 and AWS issue [427|https://github.com/aws/aws-sdk-java/issues/427]

Passing the file directly to the transfer manager apparently helps, though that 
will complicate the process in two ways: 

1. the buffering mechanism is now visible to the S3aBlockOutputStream
2. there's the problem of deleting the file after the async xfer operation 
completes. Currently the stream deletes it in close(); without that a progress 
callback would need to react to the completed event and delete the file. Viable.

Before then: experiment without the buffering (performance impact?) and with 
smaller partition sizes.

Also, an unrelated idea: what about an option to always make the first block a 
memory block? That way, small files would be written without going near the 
local FS, while larger files would still be uploaded.
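
The delete-after-completion idea mentioned above can be sketched with plain JDK futures. This is a stand-in illustration, not the S3A code: `uploadAndCleanup` is a hypothetical name and the `Files.size()` call stands in for the real async transfer; the point is that the completion stage, not `close()`, removes the buffer file.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

/** Sketch: delete the buffer file when the async transfer completes. */
public class DeferredCleanup {
    public static CompletableFuture<Long> uploadAndCleanup(
            Path buffer, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return Files.size(buffer);    // stand-in for the real upload
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }, pool).whenComplete((len, err) -> {
            try {
                Files.deleteIfExists(buffer); // cleanup on success or failure
            } catch (IOException ignored) {
                // best-effort delete
            }
        });
    }
}
```

The stream's `close()` would then only wait on (or hand off) the returned future, rather than deleting the file itself.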

> S3A to support huge file writes and operations -with tests
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename






[jira] [Commented] (HADOOP-13560) S3A to support huge file writes and operations -with tests

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470707#comment-15470707
 ] 

Steve Loughran commented on HADOOP-13560:
-

2GB uploads fail when file-buffered; the AWS SDK complains:
{code}
Running org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks
Tests run: 5, Failures: 0, Errors: 1, Skipped: 3, Time elapsed: 1,258.424 sec 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks
test_010_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks)
  Time elapsed: 1,256.013 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: Multi-part upload with id 
'dZga.hig99Nxdm1S5dlcilzpg1kiav7ZF2QCJZZydN0qyE7U_pMUEYdACOavY_us3q9CgIxfKaQadXLhgUseUw--'
 on tests3a/scale/hugefile: com.amazonaws.ResetException: Failed to reset the 
request input stream;  If the request involves an input stream, the maximum 
stream buffer size can be configured via 
request.getRequestClientOptions().setReadLimit(int): Failed to reset the 
request input stream;  If the request involves an input stream, the maximum 
stream buffer size can be configured via 
request.getRequestClientOptions().setReadLimit(int)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:108)
at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:165)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:418)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:356)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:261)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_010_CreateHugeFile(AbstractSTestS3AHugeFiles.java:149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.ResetException: Failed to reset the request input 
stream;  If the request involves an input stream, the maximum stream buffer 
size can be configured via request.getRequestClientOptions().setReadLimit(int)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:665)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
at 
com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1141)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:391)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:384)
at 
org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Resetting to invalid mark
at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
at 
org.apache.hadoop.fs.s3a.S3ADataBlocks$ForwardingInputStream.reset(S3ADataBlocks.java:432)
at 
com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
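
The root cause in that trace is `BufferedInputStream` invalidating its mark once more bytes than the mark's read limit have been consumed; the SDK then wraps the resulting IOException in a `ResetException`. It can be reproduced in isolation with a standalone JDK demo (not the SDK path):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

/**
 * Standalone demo of "Resetting to invalid mark": reset() fails after
 * reading past the mark's read limit.
 */
public class MarkResetDemo {
    /** Returns true if reset() failed after reading dataSize bytes. */
    public static boolean resetFails(int dataSize, int readLimit)
            throws IOException {
        // Small internal buffer so the mark window is tight.
        BufferedInputStream in = new BufferedInputStream(
                new ByteArrayInputStream(new byte[dataSize]), 16);
        in.mark(readLimit);       // promise to re-read at most readLimit bytes
        in.readNBytes(dataSize);  // consume the whole stream
        try {
            in.reset();
            return false;         // mark still valid, reset succeeded
        } catch (IOException e) {
            return true;          // "Resetting to invalid mark"
        }
    }
}
```

This matches the SDK's advice in the message: raising the read limit (`request.getRequestClientOptions().setReadLimit(int)`) widens the window within which a retry can reset the stream.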

[jira] [Updated] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS

2016-09-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13341:
--
Release Note: 

Users:
* Ability to set per-command+sub-command options from the command line.
* Makes daemon environment variable options consistent across the project. (See 
deprecation list below)
* HADOOP\_CLIENT\_OPTS is now honored for every non-daemon sub-command. Prior 
to this change, many sub-commands did not use it.

Developers:
* No longer need to do custom handling for options in the case section of the 
shell scripts.
* Consolidates all \_OPTS handling into hadoop-functions.sh to enable future 
projects.
* All daemons running with secure mode features now get \_SECURE\_EXTRA\_OPTS 
support.

\_OPTS Changes:

| Old | New |
|: |: |
| HADOOP\_BALANCER\_OPTS | HDFS\_BALANCER\_OPTS | 
| HADOOP\_DATANODE\_OPTS | HDFS\_DATANODE\_OPTS | 
| HADOOP\_DN\_SECURE\_EXTRA\_OPTS | HDFS\_DATANODE\_SECURE\_EXTRA\_OPTS | 
| HADOOP\_JOB\_HISTORYSERVER\_OPTS | MAPRED\_HISTORYSERVER\_OPTS | 
| HADOOP\_JOURNALNODE\_OPTS | HDFS\_JOURNALNODE\_OPTS | 
| HADOOP\_MOVER\_OPTS | HDFS\_MOVER\_OPTS | 
| HADOOP\_NAMENODE\_OPTS | HDFS\_NAMENODE\_OPTS | 
| HADOOP\_NFS3\_OPTS | HDFS\_NFS3\_OPTS | 
| HADOOP\_NFS3\_SECURE\_EXTRA\_OPTS | HDFS\_NFS3\_SECURE\_EXTRA\_OPTS | 
| HADOOP\_PORTMAP\_OPTS | HDFS\_PORTMAP\_OPTS | 
| HADOOP\_SECONDARYNAMENODE\_OPTS | HDFS\_SECONDARYNAMENODE\_OPTS | 
| HADOOP\_ZKFC\_OPTS | HDFS\_ZKFC\_OPTS | 




  was:

Users:
* Ability to set per-command+sub-command options from the command line.
* Makes daemon options consistent across the project. (See deprecation list 
below)
* HADOOP\_CLIENT\_OPTS is now honored for every non-daemon sub-command. Prior 
to this change, many sub-commands did not use it.

Developers:
* No longer need to do custom handling for options in the case section of the 
shell scripts.
* Consolidates all \_OPTS handling into hadoop-functions.sh to enable future 
projects.
* All daemons running with secure mode features now get \_SECURE\_EXTRA\_OPTS 
support.

\_OPTS Changes:

| Old | New |
|: |: |
| HADOOP\_BALANCER\_OPTS | HDFS\_BALANCER\_OPTS | 
| HADOOP\_DATANODE\_OPTS | HDFS\_DATANODE\_OPTS | 
| HADOOP\_DN\_SECURE\_EXTRA\_OPTS | HDFS\_DATANODE\_SECURE\_EXTRA\_OPTS | 
| HADOOP\_JOB\_HISTORYSERVER\_OPTS | MAPRED\_HISTORYSERVER\_OPTS | 
| HADOOP\_JOURNALNODE\_OPTS | HDFS\_JOURNALNODE\_OPTS | 
| HADOOP\_MOVER\_OPTS | HDFS\_MOVER\_OPTS | 
| HADOOP\_NAMENODE\_OPTS | HDFS\_NAMENODE\_OPTS | 
| HADOOP\_NFS3\_OPTS | HDFS\_NFS3\_OPTS | 
| HADOOP\_NFS3\_SECURE\_EXTRA\_OPTS | HDFS\_NFS3\_SECURE\_EXTRA\_OPTS | 
| HADOOP\_PORTMAP\_OPTS | HDFS\_PORTMAP\_OPTS | 
| HADOOP\_SECONDARYNAMENODE\_OPTS | HDFS\_SECONDARYNAMENODE\_OPTS | 
| HADOOP\_ZKFC\_OPTS | HDFS\_ZKFC\_OPTS | 





> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13341.00.patch
>
>
> Big features like YARN-2928 demonstrate that even senior level Hadoop 
> developers forget that daemons need a custom _OPTS env var.  We can replace 
> all of the custom vars with generic handling just like we do for the username 
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This makes it:
> a) consistent across the entire project
> b) consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that subcommands like distcp that sometimes need a 
> higher than normal client-side heapsize or custom options are a huge win.  
> Combined with .hadooprc and/or dynamic subcommands, it means users can easily 
> do customizations based upon their needs without a lot of weirdo shell 
> aliasing or one line shell scripts off to the side.






[jira] [Commented] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS

2016-09-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470640#comment-15470640
 ] 

Allen Wittenauer commented on HADOOP-13341:
---

Thanks.  I guess I'll just call for a vote and see what happens. 

> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13341.00.patch
>
>
> Big features like YARN-2928 demonstrate that even senior level Hadoop 
> developers forget that daemons need a custom _OPTS env var.  We can replace 
> all of the custom vars with generic handling just like we do for the username 
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This makes it:
> a) consistent across the entire project
> b) consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that subcommands like distcp that sometimes need a 
> higher than normal client-side heapsize or custom options are a huge win.  
> Combined with .hadooprc and/or dynamic subcommands, it means users can easily 
> do customizations based upon their needs without a lot of weirdo shell 
> aliasing or one line shell scripts off to the side.






[jira] [Commented] (HADOOP-13586) Hadoop 3.0 doesn't build on windows

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470498#comment-15470498
 ] 

Steve Loughran commented on HADOOP-13586:
-

{code}

main:
[INFO] Executed tasks
[INFO]
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-project-dist <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-project-dist ---
[INFO]
[INFO] --- exec-maven-plugin:1.3.1:exec (pre-dist) @ hadoop-project-dist ---
C:\Work\hadoop-trunk\hadoop-project/../dev-support/bin/dist-copynativelibs: 
line 16: $'\r': command not found
: invalid option namehadoop-project/../dev-support/bin/dist-copynativelibs: 
line 17: set: pipefail
C:\Work\hadoop-trunk\hadoop-project/../dev-support/bin/dist-copynativelibs: 
line 18: $'\r': command not found
C:\Work\hadoop-trunk\hadoop-project/../dev-support/bin/dist-copynativelibs: 
line 21: syntax error near unexpected token
`$'\r''
':\Work\hadoop-trunk\hadoop-project/../dev-support/bin/dist-copynativelibs: 
line 21: `function bundle_native_lib()
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main  SUCCESS [2.968s]
[INFO] Apache Hadoop Build Tools . SUCCESS [2.203s]
[INFO] Apache Hadoop Project POM . SUCCESS [2.453s]
[INFO] Apache Hadoop Annotations . SUCCESS [1.422s]
[INFO] Apache Hadoop Assemblies .. SUCCESS [0.531s]
[INFO] Apache Hadoop Project Dist POM  FAILURE [1.500s]
[INFO] Apache Hadoop Maven Plugins ... SKIPPED
[INFO] Apache Hadoop MiniKDC . SKIPPED
{code}

> Hadoop 3.0 doesn't build on windows
> ---
>
> Key: HADOOP-13586
> URL: https://issues.apache.org/jira/browse/HADOOP-13586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: Windows Server
>Reporter: Steve Loughran
>
> Builds on windows fail, even before getting to the native bits
> Looks like dev-support/bin/dist-copynativelibs isn't windows-ready






[jira] [Created] (HADOOP-13586) Hadoop 3.0 doesn't build on windows

2016-09-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13586:
---

 Summary: Hadoop 3.0 doesn't build on windows
 Key: HADOOP-13586
 URL: https://issues.apache.org/jira/browse/HADOOP-13586
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha1
 Environment: Windows Server
Reporter: Steve Loughran


Builds on windows fail, even before getting to the native bits

Looks like dev-support/bin/dist-copynativelibs isn't windows-ready






[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470440#comment-15470440
 ] 

Steve Loughran commented on HADOOP-11684:
-

Playing with this and large files, I'm starting to think we should actually 
have defaults of a lower fs.s3a.threads.max and a longer queue. Why? It's too 
easy with 10 threads to OOM a big distcp from outside an AWS DC.

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception. F.i. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java






[jira] [Commented] (HADOOP-13579) Fix source-level compatibility after HADOOP-11252

2016-09-07 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470385#comment-15470385
 ] 

Masatake Iwasaki commented on HADOOP-13579:
---

I do not think we should fix all checkstyle issues in Client.java here. We 
should fix only the relevant lines (215 and 226).

> Fix source-level compatibility after HADOOP-11252
> -
>
> Key: HADOOP-13579
> URL: https://issues.apache.org/jira/browse/HADOOP-13579
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-13579-branch-2.6.001.patch, 
> HADOOP-13579-branch-2.6.002.patch, HADOOP-13579-branch-2.7.001.patch, 
> HADOOP-13579-branch-2.7.002.patch
>
>
> Reported by [~chiwanpark]
> bq. Since 2.7.3 release, Client.get/setPingInterval is changed from public to 
> package-private.
> bq. Giraph is one of broken examples for this changes. 
> (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)






[jira] [Commented] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470367#comment-15470367
 ] 

Hudson commented on HADOOP-13541:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10404 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10404/])
HADOOP-13541 explicitly declare the Joda time version S3A depends on. (stevel: 
rev 7fdfcd8a6c9e2dd9b0fb6d4196bc371f6f9a676c)
* (edit) hadoop-project/pom.xml
* (edit) hadoop-tools/hadoop-aws/pom.xml


> explicitly declare the Joda time version S3A depends on
> ---
>
> Key: HADOOP-13541
> URL: https://issues.apache.org/jira/browse/HADOOP-13541
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13541-branch-2.8-001.patch
>
>
> Different builds of Hadoop are pulling in wildly different versions of Joda 
> time, depending on what other transitive dependencies are involved. Example: 
> 2.7.3 is somehow picking up Joda time 2.9.4; branch-2.8 is actually behind on 
> 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 
> and find a dependency has got older
> I propose explicitly declaring a dependency on joda-time in s3a, then set the 
> version to 2.9.4; upgrades are things we can manage






[jira] [Updated] (HADOOP-13541) explicitly declare the Joda time version S3A depends on

2016-09-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13541:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

thanks, fixed in 2.8+

> explicitly declare the Joda time version S3A depends on
> ---
>
> Key: HADOOP-13541
> URL: https://issues.apache.org/jira/browse/HADOOP-13541
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13541-branch-2.8-001.patch
>
>
> Different builds of Hadoop are pulling in wildly different versions of Joda 
> time, depending on what other transitive dependencies are involved. Example: 
> 2.7.3 is somehow picking up Joda time 2.9.4; branch-2.8 is actually behind on 
> 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 
> and find a dependency has got older
> I propose explicitly declaring a dependency on joda-time in s3a, then set the 
> version to 2.9.4; upgrades are things we can manage






[jira] [Created] (HADOOP-13585) shell rm command to not rename to ~/.Trash in object stores

2016-09-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13585:
---

 Summary: shell rm command to not rename to ~/.Trash in object 
stores
 Key: HADOOP-13585
 URL: https://issues.apache.org/jira/browse/HADOOP-13585
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: 2.8.0
Reporter: Steve Loughran


When you do a {{hadoop fs -rm s3a://bucket/large-file}} there's a long delay 
and then you are told that it's been moved to 
{{s3a://Users/stevel/.Trash/current/large-file}}. Where it still incurs costs. 
You then need to delete that file using {{-skipTrash}} because the {{fs 
-expunge}} command only works on the local fs: you can't point it at an object 
store unless that is the default FS.

I'd like an option to tell the shell that it should bypass the rename on an 
FS-by-FS basis, and for {{fs -expunge}} to take a filesystem as an optional 
argument.






[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470045#comment-15470045
 ] 

Steve Loughran commented on HADOOP-11090:
-

It is, as it's the corner cases. For example: were you on Kerberos?

> [Umbrella] Support Java 8 in Hadoop
> ---
>
> Key: HADOOP-11090
> URL: https://issues.apache.org/jira/browse/HADOOP-11090
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works  with Java 8 is important for the Apache community.
>   
> This JIRA is to track  the issues/experiences encountered during Java 8 
> migration. If you find a potential bug , please create a separate JIRA either 
> as a sub-task or linked into this JIRA.
> If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
> well. Or you can add  a comment  here.






[jira] [Commented] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2016-09-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470043#comment-15470043
 ] 

Steve Loughran commented on HADOOP-9819:


Yes, it does sound broken. A bit like how s3n and s3a would allow you to rename 
a path under itself, then recursively delete the base path...

> FileSystem#rename is broken, deletes target when renaming link to itself
> 
>
> Key: HADOOP-9819
> URL: https://issues.apache.org/jira/browse/HADOOP-9819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, 
> HADOOP-9819.03.patch
>
>
> Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows.
> This block of code deletes the symlink, the correct behavior is to do nothing.
> {code:java}
> try {
>   dstStatus = getFileLinkStatus(dst);
> } catch (IOException e) {
>   dstStatus = null;
> }
> if (dstStatus != null) {
>   if (srcStatus.isDirectory() != dstStatus.isDirectory()) {
> throw new IOException("Source " + src + " Destination " + dst
> + " both should be either file or directory");
>   }
>   if (!overwrite) {
> throw new FileAlreadyExistsException("rename destination " + dst
> + " already exists.");
>   }
>   // Delete the destination that is a file or an empty directory
>   if (dstStatus.isDirectory()) {
> FileStatus[] list = listStatus(dst);
> if (list != null && list.length != 0) {
>   throw new IOException(
>   "rename cannot overwrite non empty destination directory " + 
> dst);
> }
>   }
>   delete(dst, false);
> {code}
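The no-op behaviour the bug calls for can be sketched in isolation. This is a simplified model using {{java.nio}} paths, not the actual {{FileSystem#rename}} code, and it only compares normalized paths rather than resolving symlinks:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RenameGuardSketch {
    /** Guard first: renaming a path onto itself is a no-op, never a delete. */
    static boolean rename(Path src, Path dst) {
        if (src.normalize().equals(dst.normalize())) {
            return true; // POSIX rename(2) semantics: same file, succeed and do nothing
        }
        // ... the real logic (overwrite checks, deleting an empty dst) would follow ...
        return false;
    }

    public static void main(String[] args) {
        System.out.println(rename(Paths.get("/a/link"), Paths.get("/a/link"))); // true
    }
}
```

The point is ordering: the same-path check must run before the block quoted above ever reaches {{delete(dst, false)}}.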






[jira] [Updated] (HADOOP-13560) S3A to support huge file writes and operations -with tests

2016-09-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Open  (was: Patch Available)

> S3A to support huge file writes and operations -with tests
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.






[jira] [Commented] (HADOOP-13584) Merge HADOOP-12756 branch to latest trunk

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469969#comment-15469969
 ] 

Hadoop QA commented on HADOOP-13584:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 17s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13584 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827304/HADOOP-13584.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ac852d56797d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469866#comment-15469866
 ] 

Hadoop QA commented on HADOOP-12502:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 266 unchanged - 0 fixed = 267 total (was 266) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 21s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827318/HADOOP-12502-03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e1cc907c49bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c0e492e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10456/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10456/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10456/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10456/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Updated] (HADOOP-12502) SetReplication OutOfMemoryError

2016-09-07 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12502:
---
Attachment: HADOOP-12502-03.patch

Fixed {{TestFsShellCopy}}. The failure was due to unsorted {{listStatus()}} results.

Earlier, sorting was done in {{PathData#getDirectoryContents()}}; now it will 
happen in {{ChecksumFileSystem#listStatus()}}.

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Philipp Schuegerl
>Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, 
> HADOOP-12502-03.patch
>
>
> Setting the replication of a HDFS folder recursively can run out of memory. 
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>   at java.util.Arrays.copyOfRange(Arrays.java:2694)
>   at java.lang.String.(String.java:203)
>   at java.lang.String.substring(String.java:1913)
>   at java.net.URI$Parser.substring(URI.java:2850)
>   at java.net.URI$Parser.parse(URI.java:3046)
>   at java.net.URI.(URI.java:753)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   at org.apache.hadoop.fs.Path.(Path.java:116)
>   at org.apache.hadoop.fs.Path.(Path.java:94)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
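The trace above shows every {{FileStatus}} of a recursive listing being materialized as full arrays while the shell recurses. A hedged sketch of the alternative, processing one directory entry at a time; this uses {{java.nio}} as a stand-in for the Hadoop {{FileSystem}} API and is not the actual patch:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

public class StreamingWalkSketch {
    static int count = 0;

    /** Depth-first walk that never holds a full directory listing in memory. */
    static void walk(Path dir) throws IOException {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {      // one entry at a time, not an array
                count++;                     // e.g. apply setrep to this entry here
                if (Files.isDirectory(entry, LinkOption.NOFOLLOW_LINKS)) {
                    walk(entry);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("walk");
        Files.createDirectories(root.resolve("a/b"));
        Files.createFile(root.resolve("a/b/f.txt"));
        walk(root);
        System.out.println(count); // a, a/b, a/b/f.txt -> 3
    }
}
```

On the Hadoop side, the analogous change would be iterating a {{RemoteIterator<FileStatus>}} instead of buffering whole {{FileStatus[]}} arrays per directory level.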


