[jira] [Commented] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052899#comment-16052899
 ] 

Brahma Reddy Battula commented on HADOOP-14395:
---


*The following two tests are failing after this went in.*
{{TestFilterFileSystem.testFilterFileSystem}}
{{TestHarFileSystem.testInheritedMethodsImplemented}}

{noformat}
2017-06-17 08:34:05,922 ERROR fs.TestHarFileSystem 
(TestHarFileSystem.java:testInheritedMethodsImplemented(365)) - HarFileSystem 
MUST implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
{noformat}

*Reference*: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/437/testReport/junit/

Looks like pre-commit Jenkins also has these failures.

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.
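> A minimal usage sketch of such a builder-style append (hypothetical call chain, mirroring the 
> {{create}} builder from HADOOP-14394; {{appendFile(Path)}} returning 
> {{FSDataOutputStreamBuilder}} is taken from the test failure quoted above):
> {code}
> // Sketch only: open an append stream through the builder API rather than append(Path).
> // Assumes a Configuration named conf and an existing file at /tmp/data.log.
> FileSystem fs = FileSystem.get(conf);
> try (FSDataOutputStream out = fs.appendFile(new Path("/tmp/data.log")).build()) {
>   out.writeBytes("appended record\n");
> }
> {code}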



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052766#comment-16052766
 ] 

Hadoop QA commented on HADOOP-14518:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.TestFilterFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873345/HADOOP-14518-04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ef96b5d10d50 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6460df2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12557/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12557/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12557/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: 
. |
| Console output | 

[jira] [Commented] (HADOOP-14508) TestDFSIO throws NPE when set -sequential argument.

2017-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052753#comment-16052753
 ] 

Hadoop QA commented on HADOOP-14508:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:
 The patch generated 0 new + 47 unchanged - 1 fixed = 47 total (was 48) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
15s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14508 |
| GITHUB PR | https://github.com/apache/hadoop/pull/231 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a1b409e948b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6460df2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12555/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12555/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDFSIO throws NPE when set -sequential argument.
> ---
>
> Key: HADOOP-14508
> URL: https://issues.apache.org/jira/browse/HADOOP-14508
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: wenxin he
>Assignee: wenxin he
> Attachments: HADOOP-14508.002.patch
>
>
> The benchmark tool TestDFSIO throws an NPE when {{-sequential}} is set, due to an 
> uninitialized {{ioer.stream}} in {{TestDFSIO#sequentialTest}}.
> See the comments for more details and stack traces.



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath at the same time, causing recursively list action endless

2017-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052747#comment-16052747
 ] 

Hadoop QA commented on HADOOP-14469:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
46s{color} | {color:green} root generated 0 new + 821 unchanged - 2 fixed = 821 
total (was 823) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Possible null pointer dereference of dirEntries in 
org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPClient, Path, boolean)  
Dereferenced at FTPFileSystem.java:dirEntries in 
org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPClient, Path, boolean)  
Dereferenced at FTPFileSystem.java:[line 414] |
| Failed junit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14469 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873370/HADOOP-14469-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f483fa7d0498 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6460df2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12556/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12556/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12556/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12556/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-14533) TraceAdmin#run, the size of args cannot be less than zero,which is a linklist

2017-06-17 Thread Weisen Han (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052740#comment-16052740
 ] 

Weisen Han commented on HADOOP-14533:
-

None of the findbugs warnings or junit test failures are related to this patch.

> TraceAdmin#run, the size of args cannot be less than zero,which is a linklist
> -
>
> Key: HADOOP-14533
> URL: https://issues.apache.org/jira/browse/HADOOP-14533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tracing
>Affects Versions: 3.0.0-alpha3
>Reporter: Weisen Han
>Assignee: Weisen Han
>Priority: Trivial
> Attachments: HADOOP-14533-001.patch
>
>
> {code}
>   @Override
>   public int run(String argv[]) throws Exception {
>     LinkedList<String> args = new LinkedList<String>();
>     ……
>     if (args.size() < 0) {
>       System.err.println("You must specify an operation.");
>       return 1;
>     }
>     ……
>   }
> {code}
> From the code above, {{args}} is a LinkedList object, so its size cannot be less 
> than zero, meaning the code below is wrong:
> {code}
>     if (args.size() < 0) {
>       System.err.println("You must specify an operation.");
>       return 1;
>     }
> {code}
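> A minimal sketch of the presumably intended check (the attached patch may differ) would test 
> for emptiness instead:
> {code}
>     if (args.isEmpty()) {
>       System.err.println("You must specify an operation.");
>       return 1;
>     }
> {code}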



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-17 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14518:
-
Target Version/s: 3.0.0-alpha3, 2.8.1  (was: 2.8.1, 3.0.0-alpha3)
  Status: Patch Available  (was: In Progress)

> Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
> 
>
> Key: HADOOP-14518
> URL: https://issues.apache.org/jira/browse/HADOOP-14518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
>Priority: Minor
> Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt, 
> HADOOP-14518-02.patch, HADOOP-14518-03.patch, HADOOP-14518-04.patch
>
>
> WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
> default value set by the Azure Client SDK, so Hadoop traffic doesn't appear 
> any different from general Blob traffic. If we customize the User-Agent 
> header, it will enable better troubleshooting and analysis by the Azure 
> service.
> The following configuration
>   <property>
>     <name>fs.azure.user.agent.id</name>
>     <value>MSFT</value>
>   </property>
> sets the User-Agent to
>   User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 
> (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)
> Test Results :
> Tests run: 703, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath at the same time, causing recursively list action endless

2017-06-17 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052718#comment-16052718
 ] 

Hongyuan Li commented on HADOOP-14469:
--

Resubmitted the patch to add the correct description of the param {{qualifyPath}} in 
{{getFileStatus(FTPFile ftpFile, Path parentPath, boolean qualifyPath)}}.

dirEntries in {{delete(FTPClient client, Path file, boolean recursive)}} cannot be null, 
because {{listStatus(FTPClient client, Path file)}} returns either a {{FileStatus[]}} 
containing {{fileStat}} or the result of {{getFileStatuses(absolute, ftpFiles, true)}}. 
Meanwhile, {{getFileStatuses(absolute, ftpFiles, true)}} uses 
{{fileStatusList.toArray(new FileStatus[fileStatusList.size()])}}, which returns at least 
an empty array.
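A small illustration of that last point (not from the patch):
{code}
List<FileStatus> fileStatusList = new ArrayList<>();
// Even for an empty list, toArray returns a zero-length FileStatus[], never null,
// so the reported null dereference of dirEntries would be a false positive.
FileStatus[] dirEntries = fileStatusList.toArray(new FileStatus[fileStatusList.size()]);
{code}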

Test results against Serv-U, using the code in the description, are listed below:
{code}
FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
isDirectory=false; length=2036; replication=1; blocksize=4096; 
modification_time=149579778; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
isDirectory=false; length=1322; replication=1; blocksize=4096; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
group=group; permission=-; isSymlink=false}
{code}



> FTPFileSystem#listStatus get currentPath and parentPath at the same time, 
> causing recursively list action endless
> -
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_64 12.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Critical
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch, HADOOP-14469-004.patch, HADOOP-14469-005.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}
> {code}
> public void test() throws Exception {
>   FTPFileSystem ftpFileSystem = new FTPFileSystem();
>   ftpFileSystem.initialize(new Path("ftp://test:123456@192.168.44.1/").toUri(),
>       new Configuration());
>   FileStatus[] fileStatus = ftpFileSystem.listStatus(new Path("/new"));
>   for (FileStatus fileStatus1 : fileStatus) {
>     System.out.println(fileStatus1);
>   }
> }
> {code}
> Using the test code above, the test results are listed below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
> isDirectory=false; length=2036; replication=1; blocksize=4096; 
> modification_time=149579778; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
> isDirectory=false; length=1322; replication=1; blocksize=4096; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
> isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
> group=group; permission=-; isSymlink=false}
> {code}
> In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}} 
> is obviously the current path, and  

[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath at the same time, causing recursively list action endless

2017-06-17 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Attachment: HADOOP-14469-005.patch

> FTPFileSystem#listStatus get currentPath and parentPath at the same time, 
> causing recursively list action endless
> -
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, tools/distcp
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_64 12.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Critical
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch, HADOOP-14469-004.patch, HADOOP-14469-005.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}
> {code}
> public void test() throws Exception {
>   FTPFileSystem ftpFileSystem = new FTPFileSystem();
>   ftpFileSystem.initialize(new Path("ftp://test:123456@192.168.44.1/").toUri(),
>       new Configuration());
>   FileStatus[] fileStatus = ftpFileSystem.listStatus(new Path("/new"));
>   for (FileStatus fileStatus1 : fileStatus) {
>     System.out.println(fileStatus1);
>   }
> }
> {code}
> Using the test code above, the test results are listed below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
> isDirectory=false; length=2036; replication=1; blocksize=4096; 
> modification_time=149579778; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
> isDirectory=false; length=1322; replication=1; blocksize=4096; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
> isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
> group=group; permission=-; isSymlink=false}
> {code}
> In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}} 
> is obviously the current path, and {{FileStatus{path=ftp://test:123456@192.168.44.1/; ……}} 
> is obviously the parent path.
> So, if we try to walk the directory recursively, it will get stuck.
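> A minimal sketch of one possible fix (hypothetical; the attached patches may take a different 
> approach) is to skip the current- and parent-directory entries before converting them:
> {code}
> // Sketch only: ignore "." and ".." entries returned by servers such as Serv-U,
> // so a recursive listing cannot loop back onto the directory itself or its parent.
> List<FileStatus> fileStats = new ArrayList<>();
> for (FTPFile ftpFile : ftpFiles) {
>   String name = ftpFile.getName();
>   if (".".equals(name) || "..".equals(name)) {
>     continue;
>   }
>   fileStats.add(getFileStatus(ftpFile, absolute));
> }
> return fileStats.toArray(new FileStatus[fileStats.size()]);
> {code}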



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org