[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-12 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13169:
---
Assignee: Rajesh Balamohan

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots depending 
> on the file listing. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be much slower than others. It would be good to randomize 
> the file paths that are written out in SimpleCopyListing to avoid this issue.






[jira] [Created] (HADOOP-13597) Switch KMS to use Jetty

2016-09-12 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13597:
---

 Summary: Switch KMS to use Jetty
 Key: HADOOP-13597
 URL: https://issues.apache.org/jira/browse/HADOOP-13597
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
other good options, I would propose switching to {{Jetty 9}} for the following 
reasons:
* Easier migration. Both Tomcat and Jetty are {{Servlet}} containers, so we 
won't have to change client code that much. It would require more work to 
switch to {{JAX-RS}}.
* Well established.
* Good performance and scalability.

Other alternatives:
* Jersey + Grizzly
* Tomcat 8

Your opinions will be greatly appreciated.
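
For reference, a minimal sketch of what embedding Jetty 9 around an existing servlet looks like; the servlet class, context path, and port below are placeholders, not the actual KMS wiring:

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class EmbeddedJettySketch {
  // Stand-in for an existing KMS servlet; the real one would be reused as-is.
  public static class PingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
      resp.setContentType("text/plain");
      resp.getWriter().write("pong");
    }
  }

  public static void main(String[] args) throws Exception {
    Server server = new Server(16000);                        // placeholder port
    ServletContextHandler context =
        new ServletContextHandler(ServletContextHandler.SESSIONS);
    context.setContextPath("/kms");                           // placeholder path
    context.addServlet(new ServletHolder(new PingServlet()), "/*");
    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}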






[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error

2016-09-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486254#comment-15486254
 ] 

John Zhuge commented on HADOOP-7352:


[~ste...@apache.org], [~mattf], [~cmccabe], [~daryn], could you please review 
Patch 003? It changes the filesystem contract and could have quite an impact.

Please consider 3 possible behaviors (a caller-side sketch of the first one 
follows below):
* Patch 003: {{listStatus}}, {{globStatus}}, and {{expandAsGlob}} all throw IOE
* {{listStatus}} throws IOE, {{globStatus}} and {{expandAsGlob}} return an empty 
array
* All 3 return an empty array
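
For illustration only (not code from the patch), here is how a caller would handle the first behavior, where an unreadable directory results in an IOException (for example an AccessControlException) rather than a null or empty result:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class ListStatusCallerSketch {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    try {
      // With behavior (1), this throws instead of returning null on an access error.
      FileStatus[] entries = fs.listStatus(new Path("/restricted/dir"));
      for (FileStatus st : entries) {
        System.out.println(st.getPath());
      }
    } catch (AccessControlException ace) {   // subclass of IOException
      System.err.println("Cannot list directory: " + ace.getMessage());
    }
  }
}
{code}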

> FileSystem#listStatus should throw IOE upon access error
> 
>
> Key: HADOOP-7352
> URL: https://issues.apache.org/jira/browse/HADOOP-7352
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Matt Foley
>Assignee: John Zhuge
> Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, 
> HADOOP-7352.003.patch
>
>
> In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should 
> throw FileNotFoundException instead of returning null, when the target 
> directory did not exist.
> However, in the LocalFileSystem implementation today, FileSystem::listStatus 
> may still return null when the target directory exists but does not grant 
> read permission.  This causes NPEs in many callers, for all the reasons cited 
> in HADOOP-6201 and HDFS-538.  See HADOOP-7327 and its linked issues for 
> examples.






[jira] [Updated] (HADOOP-13596) Support to declare policy including a set of UI template, Policy Template, Context Variables

2016-09-12 Thread Hao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Chen updated HADOOP-13596:
--
Issue Type: New Feature  (was: Bug)

> Support to declare policy including a set of UI template, Policy Template, 
> Context Variables
> 
>
> Key: HADOOP-13596
> URL: https://issues.apache.org/jira/browse/HADOOP-13596
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Hao Chen
>







[jira] [Created] (HADOOP-13596) Support to declare policy including a set of UI template, Policy Template, Context Variables

2016-09-12 Thread Hao Chen (JIRA)
Hao Chen created HADOOP-13596:
-

 Summary: Support to declare policy including a set of UI template, 
Policy Template, Context Variables
 Key: HADOOP-13596
 URL: https://issues.apache.org/jira/browse/HADOOP-13596
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hao Chen









[jira] [Commented] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486078#comment-15486078
 ] 

Hadoop QA commented on HADOOP-13169:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
7 new + 51 unchanged - 1 fixed = 58 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
13s{color} | {color:green} hadoop-distcp in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828163/HADOOP-13169-branch-2-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 51795f4053e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 6948691 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 

[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486033#comment-15486033
 ] 

Hadoop QA commented on HADOOP-12981:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
54s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 54s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 5 
fixed = 0 total (was 5) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-tools_hadoop-aws generated 18 new + 0 unchanged 
- 0 fixed = 18 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12981 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828154/HADOOP-12981.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 31e2cd49d6e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 

[jira] [Updated] (HADOOP-13169) Randomize file list in SimpleCopyListing

2016-09-12 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13169:
--
Attachment: HADOOP-13169-branch-2-005.patch

Sharing the HDFS-based numbers for the patch:

{noformat}
Source data size
448782412981  /apps/hive/warehouse/tpcds_bin_partitioned_orc.db

Without Patch:
==
real    21m46.126s
user    0m42.566s
sys     0m3.282s

With Patch:
==
real    12m23.091s
user    0m38.096s
sys     0m2.686s
{noformat}

This was on a 20-node cluster, which shows a good improvement with HDFS as well. 
With randomization, CopyMapper can get better locality when copying data within 
HDFS. Without the patch, CopyMapper could end up reading data remotely during 
the file copy (e.g. file paths from the listing 
"web_returns/wr_returned_date_sk=2452626/000135_0, 
web_returns/wr_returned_date_sk=2452624/000121_0 ..."). 

The latest patch also adds options to turn this feature off and to tune it: 
{{distcp.simplelisting.randomize.files=false}}, 
{{distcp.simplelisting.file.status.size=1000}}. A test case was also added 
in {{TestCopyListing}}.
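
A minimal sketch of the buffered-shuffle idea using the configuration keys named above; the defaults and field names here are assumptions, not the actual SimpleCopyListing code:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;

public class RandomizedListingSketch {
  private final boolean randomize;
  private final List<FileStatus> buffer;

  public RandomizedListingSketch(Configuration conf) {
    // Key names come from the comment above; default values are assumptions.
    this.randomize = conf.getBoolean("distcp.simplelisting.randomize.files", true);
    int bufferSize = conf.getInt("distcp.simplelisting.file.status.size", 1000);
    this.buffer = new ArrayList<>(bufferSize);
  }

  void add(FileStatus status) {
    buffer.add(status);
  }

  // Shuffle buffered entries before they are written to the copy listing, so
  // consecutive map tasks do not all land on the same S3 partition or HDFS nodes.
  List<FileStatus> drain() {
    if (randomize) {
      Collections.shuffle(buffer, new Random());
    }
    List<FileStatus> out = new ArrayList<>(buffer);
    buffer.clear();
    return out;
  }
}
{code}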

> Randomize file list in SimpleCopyListing
> 
>
> Key: HADOOP-13169
> URL: https://issues.apache.org/jira/browse/HADOOP-13169
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13169-branch-2-001.patch, 
> HADOOP-13169-branch-2-002.patch, HADOOP-13169-branch-2-003.patch, 
> HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch
>
>
> When copying files to S3, some mappers can hit S3 partition hotspots depending 
> on the file listing. This is more visible when data is copied from a Hive 
> warehouse with lots of partitions (e.g. date partitions). In such cases, some 
> of the tasks tend to be much slower than others. It would be good to randomize 
> the file paths that are written out in SimpleCopyListing to avoid this issue.






[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-12 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HADOOP-13218:
--
Attachment: HADOOP-13218-v03.patch

Fixed the MapReduce failure caused by changing the default RPC engine, thanks!

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch, 
> HADOOP-13218-v03.patch
>
>
> The patch for HADOOP-12579 contains a lot of work migrating the remaining 
> Hadoop-side tests to the new RPC engine, plus some nice cleanups. HADOOP-12579 
> will be reverted to allow some time for the related YARN/MapReduce changes, so 
> this issue is opened to recommit most of the test-related work from 
> HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.
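
A sketch of the kind of change such test migrations involve, i.e. explicitly selecting the protobuf RPC engine instead of relying on WritableRPCEngine; the protocol interface here is just a placeholder:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

public class RpcEngineTestSetupSketch {
  // Placeholder protocol; a real test would use its protobuf-backed protocol.
  public interface TestProtocol {
  }

  public static Configuration newTestConf() {
    Configuration conf = new Configuration();
    // Bind the test protocol to the protobuf engine rather than WritableRPCEngine.
    RPC.setProtocolEngine(conf, TestProtocol.class, ProtobufRpcEngine.class);
    return conf;
  }
}
{code}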






[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Attachment: HADOOP-12981.002.patch

v02: the same patch after a rebase.

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch, HADOOP-12981.002.patch
>
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so they are likely 
> unused. Additionally, core-default.xml contains descriptions of these unused 
> properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> 
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> 
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> 
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> 
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion if these 
> properties are defunct.
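
One possible direction, sketched below purely for illustration (not the attached patch): warn when any of the defunct s3native.* keys are still set, so users notice they have no effect.

{code}
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class S3NativeKeyWarningSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(S3NativeKeyWarningSketch.class);
  private static final String[] DEFUNCT_KEYS = {
      "s3native.stream-buffer-size", "s3native.bytes-per-checksum",
      "s3native.client-write-packet-size", "s3native.blocksize",
      "s3native.replication"};

  public static void warnIfSet(Configuration conf) {
    for (String key : DEFUNCT_KEYS) {
      if (conf.get(key) != null) {
        // The key is present but nothing in the s3n code path reads it.
        LOG.warn("Property {} is not used by the s3n filesystem and has no effect", key);
      }
    }
  }
}
{code}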






[jira] [Updated] (HADOOP-13591) Unit test failed in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Summary: Unit test failed in 'TestOSSContractGetFileStatus' and 
'TestOSSContractRootDir'  (was: There are somethings wrong in 
'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir')

> Unit test failed in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> ---
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>







[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-09-12 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485703#comment-15485703
 ] 

Jing Zhao commented on HADOOP-13546:


Thanks for updating the patch, Xiaobing. The 007 patch looks good to me. +1. I 
will commit the patch early tomorrow if no objections.

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, 
> HADOOP-13546-HADOOP-13436.007.patch
>
>
> Override #equals and #hashCode to ensure multiple instances are treated as 
> equivalent, so that they end up sharing the same RPC connection when the other 
> arguments used to construct the ConnectionId are the same.
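
An illustrative sketch of the equals/hashCode pattern being described; the fields are hypothetical stand-ins, not the actual ConnectionId members:

{code}
import java.net.InetSocketAddress;
import java.util.Objects;

public class ConnectionKeySketch {
  private final InetSocketAddress address;
  private final Class<?> protocol;
  private final String ticket;     // hypothetical stand-in for the user/ticket

  public ConnectionKeySketch(InetSocketAddress address, Class<?> protocol,
      String ticket) {
    this.address = address;
    this.protocol = protocol;
    this.ticket = ticket;
  }

  // Two keys built from the same arguments compare equal, so a connection
  // cache keyed on them reuses one RPC connection instead of leaking new ones.
  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof ConnectionKeySketch)) {
      return false;
    }
    ConnectionKeySketch other = (ConnectionKeySketch) o;
    return Objects.equals(address, other.address)
        && Objects.equals(protocol, other.protocol)
        && Objects.equals(ticket, other.ticket);
  }

  @Override
  public int hashCode() {
    return Objects.hash(address, protocol, ticket);
  }
}
{code}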






[jira] [Commented] (HADOOP-13588) ConfServlet should respect Accept request header

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485644#comment-15485644
 ] 

Hudson commented on HADOOP-13588:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10428 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10428/])
HADOOP-13588. ConfServlet should respect Accept request header. (liuml07: rev 
59d59667a8b1d3fb4a744a41774b2397fd91cbb3)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java


> ConfServlet should respect Accept request header
> 
>
> Key: HADOOP-13588
> URL: https://issues.apache.org/jira/browse/HADOOP-13588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13588.001.patch, HADOOP-13588.002.patch
>
>
> ConfServlet provides a general service to retrieve daemon configurations. 
> However, it doesn't set the response content type according to the *Accept* 
> header. For example, when issuing the following command, 
> {code}
> curl --header "Accept:application/json" 
> http://${resourcemanager_host}:8088/conf
> {code}
> I expect the response to be in JSON format, but it is still XML. I can only 
> get JSON if I issue
> {code}
> curl "http://${resourcemanager_host}:8088/conf?format=json"
> {code}
> This is not the common way for clients to request a content type.
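
A sketch of Accept-header-driven content negotiation in a servlet, illustrating the behavior being asked for; this is not the actual ConfServlet change:

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FormatNegotiatingServletSketch extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    String accept = req.getHeader("Accept");
    boolean wantsJson = accept != null && accept.contains("application/json");
    resp.setContentType(wantsJson
        ? "application/json; charset=utf-8"
        : "text/xml; charset=utf-8");
    // A real servlet would render the live Configuration here.
    resp.getWriter().write(wantsJson ? "{\"properties\":[]}" : "<configuration/>");
  }
}
{code}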






[jira] [Commented] (HADOOP-13588) ConfServlet should respect Accept request header

2016-09-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485611#comment-15485611
 ] 

Mingliang Liu commented on HADOOP-13588:


Committed to {{trunk}} branch. Thanks for contribution, [~cheersyang].

> ConfServlet should respect Accept request header
> 
>
> Key: HADOOP-13588
> URL: https://issues.apache.org/jira/browse/HADOOP-13588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13588.001.patch, HADOOP-13588.002.patch
>
>
> ConfServlet provides a general service to retrieve daemon configurations. 
> However, it doesn't set the response content type according to the *Accept* 
> header. For example, when issuing the following command, 
> {code}
> curl --header "Accept:application/json" 
> http://${resourcemanager_host}:8088/conf
> {code}
> I expect the response to be in JSON format, but it is still XML. I can only 
> get JSON if I issue
> {code}
> curl "http://${resourcemanager_host}:8088/conf?format=json"
> {code}
> This is not the common way for clients to request a content type.






[jira] [Updated] (HADOOP-13588) ConfServlet should respect Accept request header

2016-09-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13588:
---
   Resolution: Fixed
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

> ConfServlet should respect Accept request header
> 
>
> Key: HADOOP-13588
> URL: https://issues.apache.org/jira/browse/HADOOP-13588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13588.001.patch, HADOOP-13588.002.patch
>
>
> ConfServlet provides a general service to retrieve daemon configurations. 
> However, it doesn't set the response content type according to the *Accept* 
> header. For example, when issuing the following command, 
> {code}
> curl --header "Accept:application/json" 
> http://${resourcemanager_host}:8088/conf
> {code}
> I expect the response to be in JSON format, but it is still XML. I can only 
> get JSON if I issue
> {code}
> curl "http://${resourcemanager_host}:8088/conf?format=json"
> {code}
> This is not the common way for clients to request a content type.






[jira] [Updated] (HADOOP-13588) ConfServlet should respect Accept request header

2016-09-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13588:
---
Summary: ConfServlet should respect Accept request header  (was: 
ConfServlet doesn't set response content type according to the Accept header )

> ConfServlet should respect Accept request header
> 
>
> Key: HADOOP-13588
> URL: https://issues.apache.org/jira/browse/HADOOP-13588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13588.001.patch, HADOOP-13588.002.patch
>
>
> ConfServlet provides a general service to retrieve daemon configurations. 
> However, it doesn't set the response content type according to the *Accept* 
> header. For example, when issuing the following command, 
> {code}
> curl --header "Accept:application/json" 
> http://${resourcemanager_host}:8088/conf
> {code}
> I expect the response to be in JSON format, but it is still XML. I can only 
> get JSON if I issue
> {code}
> curl "http://${resourcemanager_host}:8088/conf?format=json"
> {code}
> This is not the common way for clients to request a content type.






[jira] [Created] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2016-09-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13595:
-

 Summary: Rework hadoop_usage to be broken up by 
clients/daemons/etc.
 Key: HADOOP-13595
 URL: https://issues.apache.org/jira/browse/HADOOP-13595
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0-alpha2
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
client and what was a daemon.  Reworking the hadoop_usage output so that it is 
obvious helps fix this issue.






[jira] [Commented] (HADOOP-13365) Convert _OPTS to arrays to enable spaces in file paths

2016-09-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485294#comment-15485294
 ] 

Allen Wittenauer commented on HADOOP-13365:
---

One piece of feedback on HADOOP-13341 from [~ste...@apache.org] was that the 
docs should include an example of _OPTS ordering.  We have an opportunity to 
de-dupe here.  If this JIRA does end up de-duping, then that should be 
documented.  If it doesn't de-dupe, then the docs should specifically give an 
example of the ordering.

> Convert _OPTS to arrays to enable spaces in file paths
> --
>
> Key: HADOOP-13365
> URL: https://issues.apache.org/jira/browse/HADOOP-13365
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13365-HADOOP-13341.00.patch
>
>
> While we are mucking with all of the _OPTS variables, this is a good time to 
> convert them to arrays so that file paths with spaces in them can be used.






[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485271#comment-15485271
 ] 

Arpit Agarwal commented on HADOOP-13594:


What will be the effect on the compile time, assuming the 6 Maven plugin issues 
are addressed?

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.






[jira] [Commented] (HADOOP-13580) If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485241#comment-15485241
 ] 

Hadoop QA commented on HADOOP-13580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13580 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828097/HADOOP-13580.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 994ec0158383 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58ed4fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10487/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10487/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"
> -
>
> Key: HADOOP-13580
> URL: https://issues.apache.org/jira/browse/HADOOP-13580
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>   

[jira] [Updated] (HADOOP-13580) If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13580:
-
Attachment: HADOOP-13580.002.patch

The resubmit doesn't seem to work. Resubmitting the v01 patch as v02 to see if 
that works.

> If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"
> -
>
> Key: HADOOP-13580
> URL: https://issues.apache.org/jira/browse/HADOOP-13580
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13580.001.patch, HADOOP-13580.002.patch
>
>
> If a user is unauthorized to access the web interface, the signed text is 
> empty, and the web interface prints 
> bq. org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signed text: 
> This error message is obscure. It should be a more meaningful message like 
> "Unauthorized access."






[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-12 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485046#comment-15485046
 ] 

Ravi Prakash commented on HADOOP-10075:
---

bq. IMO we'd be better off moving out of Jetty and into jersey as the server; 
this would eliminate jersey version problems altogether, and more importantly, 
jersey "quirks"
For those of us less familiar with Jersey, could you please elaborate on this, 
Steve? Or did you mean "this would eliminate *jetty* version problems 
altogether"? Or does Jersey promise never to change its API ever?

In any case we can always make that happen later, so we shouldn't block the 
upgrade of an old and crufty Jetty if someone wants to do it.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Commented] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484919#comment-15484919
 ] 

Hudson commented on HADOOP-13341:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10426 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10426/])
HADOOP-13341.  Deprecate HADOOP_SERVERNAME_OPTS; replace with (aw: rev 
58ed4fa5449872d65efd52d840f02dd60af2771a)
* (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_add_client_opts.bats
* (edit) 
hadoop-tools/hadoop-streaming/src/main/shellprofile.d/hadoop-streaming.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_subcommand_opts.bats
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
* (edit) hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* (edit) hadoop-mapreduce-project/conf/mapred-env.sh
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_subcommand_secure_opts.bats
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_verify_user.bats
* (edit) hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* (edit) 
hadoop-tools/hadoop-archive-logs/src/main/shellprofile.d/hadoop-archive-logs.sh
* (edit) hadoop-tools/hadoop-rumen/src/main/shellprofile.d/hadoop-rumen.sh
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop
* (edit) hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh
* (edit) hadoop-tools/hadoop-extras/src/main/shellprofile.d/hadoop-extras.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* (edit) hadoop-mapreduce-project/bin/mapred
* (edit) hadoop-mapreduce-project/bin/mapred-config.sh
* (edit) hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh


> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13341.00.patch
>
>
> Big features like YARN-2928 demonstrate that even senior level Hadoop 
> developers forget that daemons need a custom _OPTS env var.  We can replace 
> all of the custom vars with generic handling just like we do for the username 
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This:
> a) makes it consistent across the entire project
> b) makes it consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that this is a huge win for subcommands like distcp 
> that sometimes need a higher-than-normal client-side heapsize or custom 
> options. Combined with .hadooprc and/or dynamic subcommands, it means users 
> can easily do customizations based upon their needs without a lot of weirdo 
> shell aliasing or one-line shell scripts off to the side.






[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484893#comment-15484893
 ] 

Wei-Chiu Chuang commented on HADOOP-12981:
--

Hi Steve, sorry I somehow missed your message. Will do it today. :)

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch
>
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so they are likely 
> unused. Additionally, core-default.xml contains descriptions of these unused 
> properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> 
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> 
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> 
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> 
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion if these 
> properties are defunct.






[jira] [Updated] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS

2016-09-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13341:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks all!

> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13341.00.patch
>
>
> Big features like YARN-2928 demonstrate that even senior level Hadoop 
> developers forget that daemons need a custom _OPTS env var.  We can replace 
> all of the custom vars with generic handling just like we do for the username 
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This:
> a) makes it consistent across the entire project
> b) makes it consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that this is a huge win for subcommands like distcp 
> that sometimes need a higher-than-normal client-side heapsize or custom 
> options. Combined with .hadooprc and/or dynamic subcommands, it means users 
> can easily do customizations based upon their needs without a lot of weirdo 
> shell aliasing or one-line shell scripts off to the side.






[jira] [Commented] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484867#comment-15484867
 ] 

ASF GitHub Bot commented on HADOOP-13341:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/126


> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13341.00.patch
>
>
> Big features like YARN-2928 demonstrate that even senior level Hadoop 
> developers forget that daemons need a custom _OPTS env var.  We can replace 
> all of the custom vars with generic handling just like we do for the username 
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This:
> a) makes it consistent across the entire project
> b) makes it consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that this is a huge win for subcommands like distcp 
> that sometimes need a higher-than-normal client-side heapsize or custom 
> options. Combined with .hadooprc and/or dynamic subcommands, it means users 
> can easily do customizations based upon their needs without a lot of weirdo 
> shell aliasing or one-line shell scripts off to the side.






[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484826#comment-15484826
 ] 

Steve Loughran commented on HADOOP-12981:
-

I want to get this patch in. Can you resync it with trunk and I'll apply it?

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch
>
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so they are likely 
> unused. Additionally, core-default.xml contains descriptions of these unused 
> properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> 
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> 
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> 
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> 
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion if these 
> properties are defunct.






[jira] [Commented] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484440#comment-15484440
 ] 

Hudson commented on HADOOP-13587:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10425/])
HADOOP-13587. distcp.map.bandwidth.mb is overwritten even when (raviprak: rev 
9faccd104672dfef123735ca8ada178fc3a6196f)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java


> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13587-01.patch, HADOOP-13587-02.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it 
> is not honored even when it is set. The current code always overwrites it with 
> either the default value (a Java constant) or the -bandwidth command line 
> option.
> The expected behavior (at least how I would expect it) is to honor the value 
> set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
> command line flag. If there is no value set in the .xml file or as a command 
> line flag, then the constant from the Java code should be used.
> Additionally, I would expect that we also try to get values from 
> distcp-site.xml, similar to other Hadoop systems.
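
A sketch of the precedence being described: an explicit -bandwidth flag wins, otherwise the value from the XML configuration, otherwise the compiled-in constant. The default used below is an assumption, not the real DistCp constant:

{code}
import org.apache.hadoop.conf.Configuration;

public class BandwidthPrecedenceSketch {
  // Hypothetical stand-in for the Java constant mentioned above.
  static final int DEFAULT_BANDWIDTH_MB = 100;

  static int resolveBandwidthMb(Configuration conf, Integer cliBandwidthMb) {
    if (cliBandwidthMb != null) {
      return cliBandwidthMb;   // user explicitly passed -bandwidth
    }
    // Falls back to distcp-defaults.xml (or distcp-site.xml) if loaded into
    // the Configuration, and only then to the compiled-in default.
    return conf.getInt("distcp.map.bandwidth.mb", DEFAULT_BANDWIDTH_MB);
  }
}
{code}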






[jira] [Updated] (HADOOP-13273) start-build-env.sh fails

2016-09-12 Thread Denis Bolshakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Bolshakov updated HADOOP-13273:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

> start-build-env.sh fails
> 
>
> Key: HADOOP-13273
> URL: https://issues.apache.org/jira/browse/HADOOP-13273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: OS X EI Capitan 10.11.5
>Reporter: Denis Bolshakov
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13273.001.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Running start-build-env.sh on Mac fails when executing this Dockerfile step:
> RUN apt-get install -y software-properties-common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-12 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13587:
--
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha2
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)
  Status: Resolved  (was: Patch Available)

Committed to trunk! Thanks for the contribution Zoran!

> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13587-01.patch, HADOOP-13587-02.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it is 
> not honored even when it is set there. The current code always overwrites it 
> with either the default value (a Java constant) or with the -bandwidth command 
> line option.
> The expected behavior (at least how I would expect it) is to honor the value 
> set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
> command line flag. If there is no value set in the .xml file or as a command 
> line flag, then the constant from the Java code should be used.
> Additionally, I would expect that we also try to get values from 
> distcp-site.xml, similar to other Hadoop systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-12 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484408#comment-15484408
 ] 

Ravi Prakash commented on HADOOP-13587:
---

+1. LGTM. Thanks a lot for your contribution Zoran. Committing shortly!

> distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set
> --
>
> Key: HADOOP-13587
> URL: https://issues.apache.org/jira/browse/HADOOP-13587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
>Priority: Minor
> Attachments: HADOOP-13587-01.patch, HADOOP-13587-02.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it is 
> not honored even when it is set there. The current code always overwrites it 
> with either the default value (a Java constant) or with the -bandwidth command 
> line option.
> The expected behavior (at least how I would expect it) is to honor the value 
> set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
> command line flag. If there is no value set in the .xml file or as a command 
> line flag, then the constant from the Java code should be used.
> Additionally, I would expect that we also try to get values from 
> distcp-site.xml, similar to other Hadoop systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484404#comment-15484404
 ] 

Hadoop QA commented on HADOOP-13591:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828054/HADOOP-13591-HADOOP-12756.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 0b0d313a33ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / e671a0f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10486/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10486/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects 

[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: HADOOP-13591-HADOOP-12756.002.patch

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: (was: HADOOP-13591-HADOOP-12756.002.patch)

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: HADOOP-13591-HADOOP-12756.002.patch

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: (was: HADOOP-13591-HADOOP-12756.002.patch)

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484333#comment-15484333
 ] 

Hadoop QA commented on HADOOP-9819:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-9819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828047/HADOOP-9819.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 11afb71201fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc01ed70 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10484/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10484/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FileSystem#rename is broken, deletes target when renaming link to itself
> 
>
> Key: HADOOP-9819
> URL: https://issues.apache.org/jira/browse/HADOOP-9819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, 
> HADOOP-9819.03.patch, HADOOP-9819.04.patch
>
>
> Uncovered while fixing 

[jira] [Commented] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484306#comment-15484306
 ] 

Hadoop QA commented on HADOOP-13591:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-13591 does not apply to HADOOP-12756. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828052/HADOOP-13591-HADOOP-12756.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10485/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Attachment: HADOOP-13591-HADOOP-12756.002.patch

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13591) There are somethings wrong in 'TestOSSContractGetFileStatus' and 'TestOSSContractRootDir'

2016-09-12 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13591:
---
Summary: There are somethings wrong in 'TestOSSContractGetFileStatus' and 
'TestOSSContractRootDir'  (was: issue about the logic of `directory` in Aliyun 
OSS)

> There are somethings wrong in 'TestOSSContractGetFileStatus' and 
> 'TestOSSContractRootDir'
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2016-09-12 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-9819:
-
Attachment: HADOOP-9819.04.patch

[~cnauroth],

I rebased my patch. Please check patch 04.

> FileSystem#rename is broken, deletes target when renaming link to itself
> 
>
> Key: HADOOP-9819
> URL: https://issues.apache.org/jira/browse/HADOOP-9819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
> Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, 
> HADOOP-9819.03.patch, HADOOP-9819.04.patch
>
>
> Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows.
> This block of code deletes the symlink; the correct behavior is to do nothing.
> {code:java}
> try {
>   dstStatus = getFileLinkStatus(dst);
> } catch (IOException e) {
>   dstStatus = null;
> }
> if (dstStatus != null) {
>   if (srcStatus.isDirectory() != dstStatus.isDirectory()) {
> throw new IOException("Source " + src + " Destination " + dst
> + " both should be either file or directory");
>   }
>   if (!overwrite) {
> throw new FileAlreadyExistsException("rename destination " + dst
> + " already exists.");
>   }
>   // Delete the destination that is a file or an empty directory
>   if (dstStatus.isDirectory()) {
> FileStatus[] list = listStatus(dst);
> if (list != null && list.length != 0) {
>   throw new IOException(
>   "rename cannot overwrite non empty destination directory " + 
> dst);
> }
>   }
>   delete(dst, false);
> {code}
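
For illustration only (not the attached patches): the kind of guard being 
suggested is to detect the self-rename before falling through to the 
overwrite/delete branch, along these lines.

{code:java}
import org.apache.hadoop.fs.Path;

/**
 * Illustrative sketch: if source and destination are the same path
 * (e.g. a link renamed onto itself), rename should be a no-op instead
 * of reaching the code that deletes the destination.
 */
final class RenameGuard {
  private RenameGuard() {
  }

  static boolean isSelfRename(Path src, Path dst) {
    return src.equals(dst);
  }
}
{code}

The real fix may also need fully qualified path comparison and care around the 
link-versus-target distinction; see the attached patches for the actual change.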



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13580) If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13580:
-
Status: Patch Available  (was: Open)

cancel & resubmit

> If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"
> -
>
> Key: HADOOP-13580
> URL: https://issues.apache.org/jira/browse/HADOOP-13580
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13580.001.patch
>
>
> If a user is unauthorized to access the web interface, the signed text is 
> empty, and the web interface prints 
> bq. org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signed text: 
> This error message is obscure. It should be a more meaningful message like 
> "Unauthorized access."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13580) If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13580:
-
Status: Open  (was: Patch Available)

> If user is unauthorized, log "unauthorized" instead of "Invalid signed text:"
> -
>
> Key: HADOOP-13580
> URL: https://issues.apache.org/jira/browse/HADOOP-13580
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13580.001.patch
>
>
> If a user is unauthorized to access the web interface, the signed text is 
> empty, and the web interface prints 
> bq. org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signed text: 
> This error message is obscure. It should be a more meaningful message like 
> "Unauthorized access."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484005#comment-15484005
 ] 

Andras Bokor commented on HADOOP-13592:
---

I agree with [~ste...@apache.org]. E.g., in YARN-5159 I could not chop the lines 
down to under 80 characters. 

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they 
> are not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output these errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Patch Available  (was: Open)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13582) Implement logExpireToken in ZKDelegationTokenSecretManager

2016-09-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483869#comment-15483869
 ] 

Wei-Chiu Chuang commented on HADOOP-13582:
--

Looks like a good one to have. Would it make sense to also log other info 
besides the sequence number, like the issue date and max date?
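
For example, a debug line along these lines would cover those fields (a sketch 
only, not the attached patch; the accessors shown are assumed to be those of 
AbstractDelegationTokenIdentifier):

{code:java}
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;

/** Sketch: the fields worth including when logging an expired token. */
final class ExpireTokenLogFormat {
  private ExpireTokenLogFormat() {
  }

  static String format(AbstractDelegationTokenIdentifier ident) {
    return "Removing expired token: seq=" + ident.getSequenceNumber()
        + ", issueDate=" + ident.getIssueDate()
        + ", maxDate=" + ident.getMaxDate()
        + ", owner=" + ident.getOwner();
  }
}
{code}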

Thanks.

> Implement logExpireToken in ZKDelegationTokenSecretManager
> --
>
> Key: HADOOP-13582
> URL: https://issues.apache.org/jira/browse/HADOOP-13582
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-13582.01.patch
>
>
> During my investigation on HADOOP-13487, I found it pretty difficult to track 
> KMS DTs in ZK, even in debug level.
> Adding a simple debug log to make future triages easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483862#comment-15483862
 ] 

Steve Loughran commented on HADOOP-13560:
-

latest PR update includes documentation.

One feature we could also consider is caching write failures in the background 
threads and raising them in the output stream thread once one has been caught 
and logged. This will not let the caller re-attempt the write, but it will make 
the failure visible, *and it will make it visible fast*.

As it is, failures are delayed until the close operation and 
waitForAllPartUploads(), where the exception is at risk of being swallowed or, 
at best, merely logged, hiding the fact that a write has failed and that data 
has been lost.
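
A sketch of that "cache and rethrow" pattern (illustrative only, not the 
attached patches): the background upload threads record the first failure, and 
the stream surfaces it on the next write rather than waiting for close().

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

/** Illustrative pattern: surface asynchronous upload failures early. */
class FailureCachingStream {

  private final AtomicReference<IOException> firstFailure =
      new AtomicReference<>();

  /** Called from a background upload thread when a part upload fails. */
  void noteFailure(IOException e) {
    firstFailure.compareAndSet(null, e); // keep only the first failure
  }

  /** Called at the top of write()/flush()/close() in the stream thread. */
  void checkForFailure() throws IOException {
    IOException e = firstFailure.get();
    if (e != null) {
      throw new IOException("asynchronous part upload failed", e);
    }
  }
}
{code}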

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Summary: S3ABlockOutputStream to support huge (many GB) file writes  (was: 
S3A to support huge file writes and operations -with tests)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) issue about the logic of `directory` in Aliyun OSS

2016-09-12 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483818#comment-15483818
 ] 

Genmao Yu commented on HADOOP-13591:


[~ste...@apache.org] [~shimingfei]  I see and will update the patch as soon as 
possible.

> issue about the logic of `directory` in Aliyun OSS
> --
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483781#comment-15483781
 ] 

Hadoop QA commented on HADOOP-13592:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 18s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13592 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827976/HADOOP-13592.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 36695d60f694 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc01ed70 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10479/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10479/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10479/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10479/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they 
> are not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output these errors and warnings at compile time.



--
This message 

[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483755#comment-15483755
 ] 

Hadoop QA commented on HADOOP-12667:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
37s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-12667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828013/HADOOP-12667-branch-2-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 30f2a1821c26 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 6948691 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| findbugs | v3.0.0 |
| JDK v1.7.0_111  Test Results 

[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483704#comment-15483704
 ] 

Hadoop QA commented on HADOOP-13593:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828001/HADOOP-13593.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 648bffbe0d87 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc01ed70 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10481/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10481/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10481/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: 

[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483703#comment-15483703
 ] 

Yuanbo Liu commented on HADOOP-13593:
-

[~ste...@apache.org] Yes there is a schema check in {{FileUtils#compareFs}}

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483686#comment-15483686
 ] 

Yuanbo Liu edited comment on HADOOP-13593 at 9/12/16 9:58 AM:
--

[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack info and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtils#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
  return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
info more accurate; "Work path ... in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote]
Thanks for your info, yes you're right, if object store doesn't support atomic 
rename, it's not proper to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!


was (Author: yuanbo):
[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack info and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtils#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
  return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
info more accurate; "Work path ... in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote]
Thanks for your info, yes you're right, if object store doesn't support atomic 
rename, it's not proper to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483692#comment-15483692
 ] 

Steve Loughran commented on HADOOP-13593:
-

one more thought: the code should be comparing FS schemas. If the filesystem is 
different, there's no point checking hostnames, is there? 
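
A sketch of that ordering (illustrative only, not the attached patches): compare 
schemes first and only then bother with hosts, treating a null host or authority 
(common for object-store URIs) leniently.

{code:java}
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;

/** Illustrative only: scheme-first comparison of two filesystems. */
final class FsComparison {
  private FsComparison() {
  }

  static boolean sameFileSystem(FileSystem srcFs, FileSystem dstFs) {
    URI src = srcFs.getUri();
    URI dst = dstFs.getUri();
    if (!equalsIgnoreCaseNullSafe(src.getScheme(), dst.getScheme())) {
      return false; // different scheme: definitely a different filesystem
    }
    // Same scheme: fall back to comparing authority/host, allowing nulls.
    return equalsIgnoreCaseNullSafe(src.getAuthority(), dst.getAuthority());
  }

  private static boolean equalsIgnoreCaseNullSafe(String a, String b) {
    return (a == null) ? (b == null) : a.equalsIgnoreCase(b);
  }
}
{code}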

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483686#comment-15483686
 ] 

Yuanbo Liu edited comment on HADOOP-13593 at 9/12/16 9:59 AM:
--

[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack info and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtil#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
message more accurate; "Work path..in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote}
Thanks for the info; you're right, if the object store doesn't support atomic 
rename, it's not appropriate to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!


was (Author: yuanbo):
[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack info and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtil#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
message more accurate; "Work path..in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote}
Thanks for the info; you're right, if the object store doesn't support atomic 
rename, it's not appropriate to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API

2016-09-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12667:

Status: Patch Available  (was: Open)

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667-branch-2-002.patch, 
> HADOOP-12667-branch-2-003.patch, HADOOP-12667-branch-2-004.patch, 
> HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.
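
For context, a minimal sketch of the semantics an object store has to emulate for this call, written as an assumed helper on top of a plain {{create()}}; this is not the attached patch:
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public final class NonRecursiveCreateHelper {

  private NonRecursiveCreateHelper() {
  }

  /**
   * Emulates createNonRecursive() semantics on top of a plain create():
   * fail fast when the parent directory is missing or is a file.
   */
  public static FSDataOutputStream createNonRecursive(FileSystem fs, Path f,
      FsPermission permission, boolean overwrite, int bufferSize,
      short replication, long blockSize, Progressable progress)
      throws IOException {
    Path parent = f.getParent();
    if (parent != null) {
      final FileStatus status;
      try {
        status = fs.getFileStatus(parent);
      } catch (FileNotFoundException e) {
        throw new FileNotFoundException(
            "Parent directory does not exist: " + parent);
      }
      if (!status.isDirectory()) {
        throw new ParentNotDirectoryException(
            "Parent path is not a directory: " + parent);
      }
    }
    return fs.create(f, permission, overwrite, bufferSize, replication,
        blockSize, progress);
  }
}
{code}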



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API

2016-09-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12667:

Attachment: HADOOP-12667-branch-2-004.patch

patch 004, correct branch, test named ITest, etc.

testing: s3 ireland

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667-branch-2-002.patch, 
> HADOOP-12667-branch-2-003.patch, HADOOP-12667-branch-2-004.patch, 
> HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483686#comment-15483686
 ] 

Yuanbo Liu commented on HADOOP-13593:
-

[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtil#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
message more accurate; "Work path..in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote}
Thanks for the info; you're right, if the object store doesn't support atomic 
rename, it's not appropriate to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483686#comment-15483686
 ] 

Yuanbo Liu edited comment on HADOOP-13593 at 9/12/16 9:56 AM:
--

[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack info and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtil#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
message more accurate; "Work path..in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote}
Thanks for the info; you're right, if the object store doesn't support atomic 
rename, it's not appropriate to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!


was (Author: yuanbo):
[~steve_l] Thanks a lot for your comments, that's really helpful.
{quote}
1. please can you stick the full stack trace of the exception in as a comment..
{quote}
Sorry for omitting the stack and I will edit my comment 1 to add the 
information.

{quote}
2. anything checking hostnames is going
{quote}
In fact there is a code segment in {{FileUtil#compareFs}} as below:
{code}
String srcHost = srcUri.getHost();
String dstHost = dstUri.getHost();
if (!srcHost.equals(dstHost)) {
return false;
}
{code}
and I think it can cover the case you mentioned above. Using 
"getCanonicalHostName" to double-check whether the hosts are equal seems good, 
but if the host name is an alias, it may throw UnknownHostException here. If 
you don't agree with removing the check, the least we can do is make the output 
message more accurate; "Work path..in different file system" is not right.

{quote}
3. none of the object stores support atomic renames...
{quote}
Thanks for the info; you're right, if the object store doesn't support atomic 
rename, it's not appropriate to use `distcp -atomic` here.

{quote}
If there were to be a patch on this, it'd need tests. Here I'd recommend 
{quote}
Thanks for your suggestions. I will investigate them later.
Thanks again for your time!

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483673#comment-15483673
 ] 

Hadoop QA commented on HADOOP-13594:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
7s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828005/HADOOP-13594.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux cab50b0e3a60 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc01ed70 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10482/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10482/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483433#comment-15483433
 ] 

Yuanbo Liu edited comment on HADOOP-13593 at 9/12/16 9:39 AM:
--

I find that {{FileUtil#compareFs}} uses 
"InetAddress.getByName(srcHost).getCanonicalHostName()" to get the host name of 
the Swift file system. In fact "testhadoop.softlayer" is an alias name for Swift, 
so it's impossible for "getCanonicalHostName" to resolve the real host name. I 
propose to delete it and have uploaded a v1 patch for this issue.

See the full stack info here:
{code}
16/09/08 20:59:36 WARN security.UserGroupInformation: Exception encountered 
while running the renewal command. Aborting renew thread. ExitCodeException 
exitCode=1: kinit: Ticket expired while renewing credentials

16/09/08 20:59:37 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=true, syncFolder=false, deleteMissing=false, 
ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/tmp/100M], 
targetPath=swift://testhadoop.softlayer/tmp, targetPathExists=true, 
preserveRawXattrs=false}
16/09/08 20:59:38 INFO impl.TimelineClientImpl: Timeline service address: 
https://wangxjrhel672.fyre.ibm.com:8190/ws/v1/timeline/
16/09/08 20:59:39 ERROR tools.DistCp: Exception encountered
java.lang.IllegalArgumentException: Work path 
swift://testhadoop.softlayer/._WIP_tmp1072690165 and target path 
swift://testhadoop.softlayer/tmp are in different file system
at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:237)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:174)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
{code}


was (Author: yuanbo):
I find that {{FileUtil#compareFs}} uses 
"InetAddress.getByName(srcHost).getCanonicalHostName()" to get the host name of 
the Swift file system. In fact "testhadoop.softlayer" is an alias name for Swift, 
so it's impossible for "getCanonicalHostName" to resolve the real host name. I 
propose to delete it and have uploaded a v1 patch for this issue.

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13592:

Assignee: (was: Tsuyoshi Ozawa)

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are 
> not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483650#comment-15483650
 ] 

Tsuyoshi Ozawa commented on HADOOP-13592:
-

[~ste...@apache.org] thanks for your comment! Sounds reasonable. Filed 
HADOOP-13594.

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are 
> not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483618#comment-15483618
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-13594 at 9/12/16 9:32 AM:
--

This patch results in a build failure, since findbugs reports 6 bugs 
about Apache Hadoop Maven Plugins.


was (Author: ozawa):
This patch leads to a build failure, since findbugs reports 6 bugs about 
Apache Hadoop Maven Plugins.

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13594:

Status: Patch Available  (was: Open)

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13594:

Attachment: HADOOP-13594.001.patch

This patch leads to a build failure, since findbugs reports 6 bugs about 
Apache Hadoop Maven Plugins.

> findbugs warnings to block a build
> --
>
> Key: HADOOP-13594
> URL: https://issues.apache.org/jira/browse/HADOOP-13594
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13594.001.patch
>
>
> findbugs is a good tool, but we need to run a separate command (mvn 
> findbugs:check). 
> Instead, it's better to run findbugs at compile time and block the build if it 
> finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13594) findbugs warnings to block a build

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13594:
---

 Summary: findbugs warnings to block a build
 Key: HADOOP-13594
 URL: https://issues.apache.org/jira/browse/HADOOP-13594
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


findbugs is a good tool, but we need to run a separate command (mvn findbugs:check). 

Instead, it's better to run findbugs at compile time and block the build if it 
finds errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483594#comment-15483594
 ] 

Steve Loughran commented on HADOOP-13593:
-

# please can you stick the full stack trace of the exception in as a comment; 
it is good for future searches of the same lines, and for IDEs that navigate up 
the stack. Leaving it out of the bug description, as you did, is great because 
it keeps it out of all the emails, but having that stack in the comments is 
always useful.
# anything checking hostnames is going to be there for a reason, presumably so 
that people can use URLs like hdfs://server:4040/ and 
hdfs://server.example.org:4040/ and have atomic operations. Changing this 
fairly fundamental behaviour is not going to happen, because it affects so much 
more than Swift. For that reason, it's going to have to be a -1 there, sorry.
# none of the object stores support atomic renames, so an atomic distcp isn't 
going to work. In fact, maybe they should all reject the option outright to 
stop people thinking of it.


If there were to be a patch on this, it'd need tests. Here I'd recommend that 
(a) Swift adds an implementation of {{AbstractContractDistCpTest}}, and 
(b) that base test adds a check for the atomic flag, one that every fs 
contract XML would have to declare whether or not it supports. 
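
For illustration, a rough sketch along those lines, assuming a {{SwiftContract}} binding class like the one the existing Swift contract tests use; class names and package locations here are assumptions, not a worked patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.apache.hadoop.fs.contract.ContractOptions;
import org.apache.hadoop.fs.swift.contract.SwiftContract;   // assumed binding class
import org.apache.hadoop.tools.contract.AbstractContractDistCpTest;
import org.junit.Assume;
import org.junit.Test;

public class TestSwiftContractDistCp extends AbstractContractDistCpTest {

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    // Bind the shared distcp contract tests to the Swift filesystem.
    return new SwiftContract(conf);
  }

  @Test
  public void testAtomicCopyNeedsAtomicRename() throws Throwable {
    // Skip, rather than fail, when the fs contract XML does not declare
    // atomic rename support; distcp -atomic cannot work without it.
    Assume.assumeTrue(
        "store does not declare " + ContractOptions.SUPPORTS_ATOMIC_RENAME,
        getContract().isSupported(ContractOptions.SUPPORTS_ATOMIC_RENAME,
            false));
    // ... exercise the -atomic copy here once the flag is declared ...
  }
}
{code}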



> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13593:

Attachment: HADOOP-13593.002.patch

Uploaded v2 patch to address the checkstyle issue.

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch, HADOOP-13593.002.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483570#comment-15483570
 ] 

Steve Loughran commented on HADOOP-13592:
-

Fixing those warnings is going to be impossible. Sometimes checkstyle whines 
about things (line length) that produce worse code if chopped down. 

I'm happy for findbugs to block a build, but checkstyle is simply a hint.

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are 
> not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483562#comment-15483562
 ] 

Hadoop QA commented on HADOOP-13593:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 59 unchanged - 0 fixed = 61 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
34s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827988/HADOOP-13593.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5ee61cb8044b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc01ed70 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10480/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10480/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10480/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
>  

[jira] [Updated] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13593:

Status: Patch Available  (was: Open)

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13593:

Attachment: HADOOP-13593.001.patch

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
> Attachments: HADOOP-13593.001.patch
>
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483433#comment-15483433
 ] 

Yuanbo Liu commented on HADOOP-13593:
-

I find that {{FileUtil#compareFs}} uses 
"InetAddress.getByName(srcHost).getCanonicalHostName()" to get the host name of 
the Swift file system. In fact "testhadoop.softlayer" is an alias name for Swift, 
so it's impossible for "getCanonicalHostName" to resolve the real host name. I 
propose to delete it and have uploaded a v1 patch for this issue.

> `hadoop distcp -atomic` invokes improper host check while copying data from 
> HDFS to Swift
> -
>
> Key: HADOOP-13593
> URL: https://issues.apache.org/jira/browse/HADOOP-13593
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>
> While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
> example:
> {code}
> hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
> {code}
> it throws
> {code}
> java.lang.IllegalArgumentException: Work path 
> swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
> swift://testhadoop.softlayer/tmp are in different file system
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
> .
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13593) `hadoop distcp -atomic` invokes improper host check while copying data from HDFS to Swift

2016-09-12 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HADOOP-13593:
---

 Summary: `hadoop distcp -atomic` invokes improper host check while 
copying data from HDFS to Swift
 Key: HADOOP-13593
 URL: https://issues.apache.org/jira/browse/HADOOP-13593
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yuanbo Liu


While copying data from HDFS to Swift by using `hadoop distcp -atomic`, for 
example:
{code}
hadoop distcp -atomic /tmp/100M  swift://testhadoop.softlayer//tmp
{code}
it throws
{code}
java.lang.IllegalArgumentException: Work path 
swift://testhadoop.softlayer/._WIP_tmp546958075 and target path 
swift://testhadoop.softlayer/tmp are in different file system
at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:351)
.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13592:

Assignee: Tsuyoshi Ozawa
Target Version/s: 3.0.0-alpha1, 2.9.0  (was: 2.9.0, 3.0.0-alpha1)
  Status: Patch Available  (was: Open)

Attaching a patch to output errors and warnings at compile time by making 
consoleOutput true. By default, maven-checkstyle-plugin will throw an error and 
stop the compile. It's impossible to fix all the errors now, since the number is 
over 86,000, so I suggest that failOnError be false for now.

In the future, after fixing all warnings, failOnError should be true. Ideally, 
failOnViolation should also be true.

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are 
> not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13592:

Attachment: HADOOP-13592.001.patch

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are 
> not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483343#comment-15483343
 ] 

Tsuyoshi Ozawa commented on HADOOP-13592:
-

The current number of errors is over 86,000.
{quote}
$ mvn checkstyle:checkstyle | grep "errors\ reported" | awk '{sum += $4} END 
{print sum}'
86040

$ mvn checkstyle:checkstyle | grep "errors\ reported"
Picked up _JAVA_OPTIONS: -Dfile.encoding=UTF-8
[INFO] There are 24 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 73 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 9 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 535 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 20 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 17277 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 292 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 225 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1249 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 18531 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1207 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 138 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 289 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1046 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 3529 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 587 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 3181 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 160 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 805 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 42 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 8077 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 103 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1042 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 125 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 45 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 13 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 259 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 15 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 502 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 7217 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1145 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 110 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 3163 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 710 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 6807 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 5 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 663 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1465 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 1181 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 913 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 103 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 14 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 542 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 822 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 61 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 58 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[INFO] There are 725 errors reported by Checkstyle 6.6 with 

[jira] [Created] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2016-09-12 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13592:
---

 Summary: Outputs errors and warnings by checkstyle at compile time
 Key: HADOOP-13592
 URL: https://issues.apache.org/jira/browse/HADOOP-13592
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they are not 
output at compile time. This prevents us from fixing the errors and warnings.

We should output errors and warnings at compile time.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) adding filter doesnot check if it exists in HttpServer2,this maybe results in degraded performance

2016-09-12 Thread chillon.m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon.m updated HADOOP-13306:
---
Summary: adding filter doesnot check if it exists in HttpServer2,this maybe 
results in degraded performance  (was: adding filter doesnot check if it exists 
in HttpServer,this maybe results in degraded performance)

> adding filter doesnot check if it exists in HttpServer2,this maybe results in 
> degraded performance
> --
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon.m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line 705 (hadoop 2.5.2): defineFilter() does not check whether the filter already 
> exists; we need to check for it before adding it. Otherwise every request/response 
> may be processed by the same filter several times, which can degrade performance.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I think 
> the call to addNoCacheFilter() from addDefaultApps() is unnecessary, because 
> addNoCacheFilter() has already been invoked by createWebAppContext() in the 
> HttpServer2 constructor, so every request goes through the cache filter twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) issue about the logic of `directory` in Aliyun OSS

2016-09-12 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483239#comment-15483239
 ] 

shimingfei commented on HADOOP-13591:
-

[~uncleGen]
let's make "/" a constant in Constants.java, and change ">0" to "> 0"
{code}
+if (key.length() >0 && !key.endsWith("/")) {
+  key += "/";
+}
{code}
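
For illustration, a sketch of the suggested cleanup with an assumed constant and helper name (the real Constants.java in the OSS module may choose different ones):
{code}
// Illustrative only: PATH_SEPARATOR stands in for the constant that would
// live in the OSS module's Constants.java.
public final class OssPathUtils {

  public static final String PATH_SEPARATOR = "/";

  private OssPathUtils() {
  }

  /** Treats a non-empty object key as a directory prefix by ensuring a trailing "/". */
  public static String maybeAddTrailingSlash(String key) {
    if (key.length() > 0 && !key.endsWith(PATH_SEPARATOR)) {
      return key + PATH_SEPARATOR;
    }
    return key;
  }
}
{code}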


> issue about the logic of `directory` in Aliyun OSS
> --
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) adding filter doesnot check if it exists in HttpServer,this maybe results in degraded performance

2016-09-12 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Summary: adding filter doesnot check if it exists in HttpServer,this maybe 
results in degraded performance  (was: adding filter doesnot check if it exists 
in HttpServer,)

> adding filter doesnot check if it exists in HttpServer,this maybe results in 
> degraded performance
> -
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon_m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line 705 (hadoop 2.5.2): defineFilter() does not check whether the filter already 
> exists; we need to check for it before adding it. Otherwise every request/response 
> may be processed by the same filter several times, which can degrade performance.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I think 
> the call to addNoCacheFilter() from addDefaultApps() is unnecessary, because 
> addNoCacheFilter() has already been invoked by createWebAppContext() in the 
> HttpServer2 constructor, so every request goes through the cache filter twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) adding filter doesnot check if it exists in HttpServer,

2016-09-12 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Summary: adding filter does not check if it exists in HttpServer,  (was: add 
filter does not check if it exists in HttpServer)

> adding filter does not check if it exists in HttpServer,
> ---
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon_m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line-705 (hadoop 2.5.2): defineFilter() does not check whether a filter 
> already exists; we need to check before adding it. Otherwise every 
> request/response may be handled by the same filter multiple times, which may 
> affect performance.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. 
> I think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
> because addNoCacheFilter() has already been invoked by createWebAppContext() 
> in the HttpServer2 constructor, so every request passes through the cache 
> filter twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) add filter does not check if it exists in HttpServer

2016-09-12 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Description: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
line-705 (hadoop 2.5.2): defineFilter() does not check whether a filter already 
exists; we need to check before adding it. Otherwise every request/response may 
be handled by the same filter multiple times, which may affect performance.

Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I 
think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
because addNoCacheFilter() has already been invoked by createWebAppContext() in 
the HttpServer2 constructor, so every request passes through the cache filter 
twice.

  was:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
line-705: defineFilter() does not check whether a filter already exists; we 
need to check before adding it. Otherwise every request/response may be handled 
by the same filter multiple times, which may affect performance.

Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I 
think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
because addNoCacheFilter() has already been invoked by createWebAppContext() in 
the HttpServer2 constructor, so every request passes through the cache filter 
twice.
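The existence check requested in the description could look roughly like the sketch 
below. This is illustrative only: the class and method names are placeholders, and the 
real change would be made inside HttpServer2's own filter registration code rather than 
in a separate helper class:

{code}
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: a de-duplicating filter registry. A real fix would
// live in HttpServer2's defineFilter()/addFilter() path and reuse the existing
// Jetty registration, rather than this toy class.
public class FilterRegistry {
  private final Set<String> registered = new HashSet<>();

  /**
   * Runs the registration only if a filter with the same name has not been
   * added before, so a request is never handled by the same filter twice.
   *
   * @return true if the filter was added, false if it already existed
   */
  public boolean defineFilterIfAbsent(String name, Runnable registration) {
    if (!registered.add(name)) {
      return false;               // already defined, skip the duplicate
    }
    registration.run();           // perform the actual registration once
    return true;
  }

  public static void main(String[] args) {
    FilterRegistry registry = new FilterRegistry();
    Runnable register = () -> System.out.println("registering NoCacheFilter");
    System.out.println(registry.defineFilterIfAbsent("NoCacheFilter", register)); // true
    System.out.println(registry.defineFilterIfAbsent("NoCacheFilter", register)); // false
  }
}
{code}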


> add filter does not check if it exists in HttpServer
> ---
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon_m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line-705 (hadoop 2.5.2): defineFilter() does not check whether a filter 
> already exists; we need to check before adding it. Otherwise every 
> request/response may be handled by the same filter multiple times, which may 
> affect performance.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. 
> I think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
> because addNoCacheFilter() has already been invoked by createWebAppContext() 
> in the HttpServer2 constructor, so every request passes through the cache 
> filter twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) add filter does not check if it exists in HttpServer

2016-09-12 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Description: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
line-705: defineFilter() does not check whether a filter already exists; we 
need to check before adding it. Otherwise every request/response may be handled 
by the same filter multiple times, which may affect performance.

Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I 
think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
because addNoCacheFilter() has already been invoked by createWebAppContext() in 
the HttpServer2 constructor, so every request passes through the cache filter 
twice.

  was:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
line-705: defineFilter() does not check whether a filter already exists; we 
need to check before adding it.

Also, No_Cache_Filter is added twice when an HttpServer2 object is created. I 
think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
because addNoCacheFilter() has already been invoked by createWebAppContext() in 
the HttpServer2 constructor, so every request passes through the cache filter 
twice, which may affect performance.


> add filter does not check if it exists in HttpServer
> ---
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon_m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line-705: defineFilter() does not check whether a filter already exists; we 
> need to check before adding it. Otherwise every request/response may be 
> handled by the same filter multiple times, which may affect performance.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. 
> I think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
> because addNoCacheFilter() has already been invoked by createWebAppContext() 
> in the HttpServer2 constructor, so every request passes through the cache 
> filter twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) issue about the logic of `directory` in Aliyun OSS

2016-09-12 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483203#comment-15483203
 ] 

shimingfei commented on HADOOP-13591:
-

[~steve_l] Thanks for your comments.

1. Agreed, the JIRA title should be more explicit and easier to understand.
2. Yes, we turn on the root tests and the get-file-status tests in the contract 
options.
3. We test against the public Aliyun OSS service, and there is a dedicated OSS 
bucket (in both Shanghai & North America) created by Aliyun for Hadoop testing.
4, 5. Agree with you.


> issue about the logic of `directory` in Aliyun OSS
> --
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13306) add filter does not check if it exists in HttpServer

2016-09-12 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Affects Version/s: 3.0.0-alpha1

> add filter does not check if it exists in HttpServer
> ---
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.7.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>Reporter: chillon_m
>Priority: Minor
> Attachments: 0001-HADOOP-13306(hadoop2.5.2).patch
>
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
> line-705: defineFilter() does not check whether a filter already exists; we 
> need to check before adding it.
> Also, No_Cache_Filter is added twice when an HttpServer2 object is created. 
> I think the call to addNoCacheFilter() from addDefaultApps() is unnecessary, 
> because addNoCacheFilter() has already been invoked by createWebAppContext() 
> in the HttpServer2 constructor, so every request passes through the cache 
> filter twice, which may affect performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org