[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2020-12-01 Thread Xiao Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242002#comment-17242002
 ] 

Xiao Liang commented on HDFS-6994:
--

The links
[https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3]
[http://pivotal-data-attic.github.io/pivotalrd-libhdfs3/]

are no longer accessible. Were they relocated somewhere else?

> libhdfs3 - A native C/C++ HDFS client
> -
>
> Key: HDFS-6994
> URL: https://issues.apache.org/jira/browse/HDFS-6994
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client
>Reporter: Zhanwei Wang
>Assignee: Zhanwei Wang
>Priority: Major
> Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch
>
>
> Hi All
> I just got the permission to open-source libhdfs3, which is a native C/C++ 
> HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
> Protocol.
> libhdfs3 provides the libhdfs-style C interface as well as a C++ interface. 
> It supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
> authentication.
> libhdfs3 is currently used by HAWQ at Pivotal.
> I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
> You can find the libhdfs3 code on GitHub:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3
> http://pivotal-data-attic.github.io/pivotalrd-libhdfs3/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15517) Add operation names for all logics calling FSNamesystem#writeUnlock() to record their writeLock held times

2020-08-06 Thread Xiao Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-15517:
-

Assignee: Xiao Liang

> Add operation names for all logics calling FSNamesystem#writeUnlock() to 
> record their writeLock held times
> --
>
> Key: HDFS-15517
> URL: https://issues.apache.org/jira/browse/HDFS-15517
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics, namenode
>Affects Versions: 2.9.0, 3.3.0
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Fix For: 2.9.0, 3.3.0
>
>
> Add operation names for all logic calling FSNamesystem#writeUnlock() so that 
> their writeLock held times are recorded under detailed per-operation counter 
> metrics instead of "OTHER". This is helpful for performance analysis.






[jira] [Created] (HDFS-15517) Add operation names for all logics calling FSNamesystem#writeUnlock() to record their writeLock held times

2020-08-06 Thread Xiao Liang (Jira)
Xiao Liang created HDFS-15517:
-

 Summary: Add operation names for all logics calling 
FSNamesystem#writeUnlock() to record their writeLock held times
 Key: HDFS-15517
 URL: https://issues.apache.org/jira/browse/HDFS-15517
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: metrics, namenode
Affects Versions: 3.3.0, 2.9.0
Reporter: Xiao Liang
 Fix For: 3.3.0, 2.9.0


Add operation names for all logic calling FSNamesystem#writeUnlock() so that 
their writeLock held times are recorded under detailed per-operation counter 
metrics instead of "OTHER". This is helpful for performance analysis.






[jira] [Created] (HDFS-15455) Expose HighestPriorityReplBlocks and LastReplicaBlocks statistics

2020-07-07 Thread Xiao Liang (Jira)
Xiao Liang created HDFS-15455:
-

 Summary: Expose HighestPriorityReplBlocks and LastReplicaBlocks 
statistics
 Key: HDFS-15455
 URL: https://issues.apache.org/jira/browse/HDFS-15455
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.9.2, 2.9.0
Reporter: Xiao Liang
Assignee: Xiao Liang


Similar to HDFS-13658, blocks with only one replica may cause the loss of 
customer data, so we need to surface this and take action if necessary. This 
change targets HDFS 2.9, as we will still be using it for some time and 
switching to HDFS 3.x is not simple work.






[jira] [Updated] (HDFS-15342) Handle more generic exceptions in the threads created in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool

2020-05-07 Thread Xiao Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-15342:
--
Description: 
In 
*org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool*,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example HDFS-14582). Without catching them, the thread can terminate 
without any error log, and the join() won't be aware of it. The exception info 
appears only in stderr, where it likely goes unnoticed and is unsearchable from 
the logs; this can make related issues difficult to investigate.

 

  was:
In 
*org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool*,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example HDFS-14582). Without catching them, the thread can terminate 
without any error log. The exception info appears only in stderr, where it 
likely goes unnoticed and is unsearchable from the logs; this can make related 
issues difficult to investigate.

 


> Handle more generic exceptions in the threads created in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool
> 
>
> Key: HDFS-15342
> URL: https://issues.apache.org/jira/browse/HDFS-15342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.0, 3.1.0, 2.9.1, 3.2.0, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> In 
> *org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool*,
>  a thread is created for each volume to add block pools. Inside the thread, 
> IOException is caught and handled; however, other types of exceptions can 
> occur (for example HDFS-14582). Without catching them, the thread can 
> terminate without any error log, and the join() won't be aware of it. The 
> exception info appears only in stderr, where it likely goes unnoticed and is 
> unsearchable from the logs; this can make related issues difficult to 
> investigate.
>  






[jira] [Updated] (HDFS-15342) Handle more generic exceptions in the threads created in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool

2020-05-07 Thread Xiao Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-15342:
--
Description: 
In 
*org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool*,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example HDFS-14582). Without catching them, the thread can terminate 
without any error log. The exception info appears only in stderr, where it 
likely goes unnoticed and is unsearchable from the logs; this can make related 
issues difficult to investigate.

 

  was:
In 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example HDFS-14582). Without catching them, the thread can terminate 
without any error log. The exception info appears only in stderr, where it 
likely goes unnoticed and is unsearchable from the logs; this can make related 
issues difficult to investigate.

 


> Handle more generic exceptions in the threads created in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool
> 
>
> Key: HDFS-15342
> URL: https://issues.apache.org/jira/browse/HDFS-15342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.0, 3.1.0, 2.9.1, 3.2.0, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> In 
> *org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool*,
>  a thread is created for each volume to add block pools. Inside the thread, 
> IOException is caught and handled; however, other types of exceptions can 
> occur (for example HDFS-14582). Without catching them, the thread can 
> terminate without any error log. The exception info appears only in stderr, 
> where it likely goes unnoticed and is unsearchable from the logs; this can 
> make related issues difficult to investigate.
>  






[jira] [Updated] (HDFS-15342) Handle more generic exceptions in the threads created in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool

2020-05-07 Thread Xiao Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-15342:
--
Description: 
In 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example HDFS-14582). Without catching them, the thread can terminate 
without any error log. The exception info appears only in stderr, where it 
likely goes unnoticed and is unsearchable from the logs; this can make related 
issues difficult to investigate.

 

  was:
*++*++In 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example 
[HDFS-14582|https://issues.apache.org/jira/browse/HDFS-14582]). Without 
catching them, the thread can terminate without any error log. The exception 
info appears only in stderr, where it likely goes unnoticed and is unsearchable 
from the logs; this can make related issues difficult to investigate.

 


> Handle more generic exceptions in the threads created in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool
> 
>
> Key: HDFS-15342
> URL: https://issues.apache.org/jira/browse/HDFS-15342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.0, 3.1.0, 2.9.1, 3.2.0, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> In 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
>  a thread is created for each volume to add block pools. Inside the thread, 
> IOException is caught and handled; however, other types of exceptions can 
> occur (for example HDFS-14582). Without catching them, the thread can 
> terminate without any error log. The exception info appears only in stderr, 
> where it likely goes unnoticed and is unsearchable from the logs; this can 
> make related issues difficult to investigate.
>  






[jira] [Assigned] (HDFS-15342) Handle more generic exceptions in the threads created in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool

2020-05-07 Thread Xiao Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-15342:
-

Assignee: Xiao Liang

> Handle more generic exceptions in the threads created in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool
> 
>
> Key: HDFS-15342
> URL: https://issues.apache.org/jira/browse/HDFS-15342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.0, 3.1.0, 2.9.1, 3.2.0, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> In 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
>  a thread is created for each volume to add block pools. Inside the thread, 
> IOException is caught and handled; however, other types of exceptions can 
> occur (for example 
> [HDFS-14582|https://issues.apache.org/jira/browse/HDFS-14582]). Without 
> catching them, the thread can terminate without any error log. The exception 
> info appears only in stderr, where it likely goes unnoticed and is 
> unsearchable from the logs; this can make related issues difficult to 
> investigate.
>  






[jira] [Created] (HDFS-15342) Handle more generic exceptions in the threads created in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool

2020-05-07 Thread Xiao Liang (Jira)
Xiao Liang created HDFS-15342:
-

 Summary: Handle more generic exceptions in the threads created in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool
 Key: HDFS-15342
 URL: https://issues.apache.org/jira/browse/HDFS-15342
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.9.2, 3.2.0, 2.9.1, 3.1.0, 2.9.0
Reporter: Xiao Liang


In 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList#addBlockPool,
 a thread is created for each volume to add block pools. Inside the thread, 
IOException is caught and handled; however, other types of exceptions can occur 
(for example 
[HDFS-14582|https://issues.apache.org/jira/browse/HDFS-14582]). Without 
catching them, the thread can terminate without any error log. The exception 
info appears only in stderr, where it likely goes unnoticed and is unsearchable 
from the logs; this can make related issues difficult to investigate.
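The failure mode and fix described above can be sketched as follows. This is an illustrative stand-in, not the real FsVolumeList code: `AddBlockPoolSketch`, the "bad volume" condition, and the error queue are assumptions made for the example. The key point is catching Throwable in the worker and surfacing collected errors after join().

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hedged sketch: each per-volume worker catches Throwable rather than only
// IOException, records the failure, and the joining thread checks the
// collected errors instead of remaining unaware of a silent thread death.
class AddBlockPoolSketch {
    static List<String> addBlockPool(List<String> volumes) {
        ConcurrentLinkedQueue<Throwable> errors = new ConcurrentLinkedQueue<>();
        List<String> added = Collections.synchronizedList(new ArrayList<>());
        List<Thread> workers = new ArrayList<>();
        for (String volume : volumes) {
            Thread t = new Thread(() -> {
                try {
                    // Stand-in for the real per-volume scan, which can throw
                    // more than IOException (e.g. a RuntimeException as in
                    // HDFS-14582).
                    if (volume.isEmpty()) {
                        throw new IllegalStateException("bad volume");
                    }
                    added.add(volume);
                } catch (Throwable err) {  // broader than catch (IOException e)
                    errors.add(err);       // recorded instead of lost to stderr
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        // After join(), the caller learns about any worker failure.
        if (!errors.isEmpty()) {
            throw new RuntimeException(
                "addBlockPool failed on " + errors.size() + " volume(s)",
                errors.peek());
        }
        return added;
    }
}
```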

 






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-06-17 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866040#comment-16866040
 ] 

Xiao Liang commented on HDFS-14201:
---

[^HDFS-14201.008.patch] LGTM too; thanks [~hexiaoqiao] for the continuous effort.

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch, HDFS-14201.005.patch, 
> HDFS-14201.006.patch, HDFS-14201.007.patch, HDFS-14201.008.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-27 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780154#comment-16780154
 ] 

Xiao Liang commented on HDFS-14201:
---

Thanks [~hexiaoqiao] for reviewing.

TestHASafeMode#testTransitionToActiveWhenSafeMode passes with 
[^HDFS-14201.004.patch]; I think it may be due to the randomized root dir of 
MiniDFSCluster (set by line 904 of TestHASafeMode.java in 
[^HDFS-14201.004.patch]).

On my local machine all test cases of 
hadoop.hdfs.tools.TestDFSZKFailoverController passed, and the other failed test 
cases reported by Jenkins don't seem related to the change.

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-27 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779044#comment-16779044
 ] 

Xiao Liang commented on HDFS-14201:
---

Thanks [~hexiaoqiao], I uploaded [^HDFS-14201.004.patch], which combines the 
logic; in it I moved the safemode check from the NameNodeRpcServer class to 
NameNode, to keep all related logic in the same class.

There is a minor modification in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode#testTransitionToActiveWhenSafeMode
 for building the MiniDFSCluster, to make sure it can also succeed on Windows 
(related to HDFS-13408).

Please help review, thank you.

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Updated] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-27 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-14201:
--
Attachment: HDFS-14201.004.patch

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch, HDFS-14201.004.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-25 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777427#comment-16777427
 ] 

Xiao Liang commented on HDFS-14201:
---

Thanks [~hexiaoqiao] for pointing out the manual transition; it's a valid 
point. Actually, I think combining the logic in [^HDFS-14201.002.patch] and 
[^HDFS-14201.003.patch] could be an option, so that when the switch for this 
feature is on:
 # in auto-failover mode, ZKFC chooses a ready-to-serve NameNode to become 
active, as the ones in safemode report UNHEALTHY;
 # in manual mode, a NameNode in safemode will not be able to transition to 
active.

The same configuration item would control whether this logic is on or off. What 
do you think, [~hexiaoqiao]? I will upload a new patch as proposed if you think 
it's a reasonable option.

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Updated] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-24 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-14201:
--
Attachment: HDFS-14201.003.patch

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Commented] (HDFS-14201) Ability to disallow safemode NN to become active

2019-02-24 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776496#comment-16776496
 ] 

Xiao Liang commented on HDFS-14201:
---

Thanks [~elgoiri] for the information; I uploaded [^HDFS-14201.003.patch] as 
the PR does not seem to trigger Yetus.

Thanks [~hexiaoqiao] for the suggestion; however, I think the NameNode state 
transition is unnecessary. A NameNode in safemode does not really need to do 
anything other than report unhealthy to ZKFC (when so configured) to avoid 
being selected; it's more straightforward that way, I think.

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 3.1.1, 2.9.2
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
> Attachments: HDFS-14201.001.patch, HDFS-14201.002.patch, 
> HDFS-14201.003.patch
>
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Updated] (HDFS-14201) Ability to disallow safemode NN to become active

2019-01-12 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-14201:
--
Target Version/s: 2.9.2, 3.1.1  (was: 3.1.1, 2.9.2)
  Status: Patch Available  (was: Open)

> Ability to disallow safemode NN to become active
> 
>
> Key: HDFS-14201
> URL: https://issues.apache.org/jira/browse/HDFS-14201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: auto-failover
>Affects Versions: 2.9.2, 3.1.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>
> Currently with HA, a Namenode in safemode can be selected as active, even 
> though Namenodes not in safemode are better choices for availability of both 
> reads and writes.
> It can take tens of minutes for a cold-started Namenode to get out of 
> safemode, especially when there are a large number of files and blocks in 
> HDFS. That means that if a Namenode in safemode becomes active, the cluster 
> will not be fully functioning for quite a while, even when a Namenode not in 
> safemode is available.
> The proposal here is to add an option that allows a Namenode to report itself 
> as UNHEALTHY to ZKFC while it is in safemode, so that only a fully 
> functioning Namenode can become active, improving the general availability of 
> the cluster.






[jira] [Created] (HDFS-14201) Ability to disallow safemode NN to become active

2019-01-11 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-14201:
-

 Summary: Ability to disallow safemode NN to become active
 Key: HDFS-14201
 URL: https://issues.apache.org/jira/browse/HDFS-14201
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: auto-failover
Affects Versions: 2.9.2, 3.1.1
Reporter: Xiao Liang
Assignee: Xiao Liang


Currently with HA, a Namenode in safemode can be selected as active, even 
though, for availability of both reads and writes, Namenodes not in safemode 
are better choices to become active.

It can take tens of minutes for a cold-started Namenode to get out of safemode, 
especially when there are a large number of files and blocks in HDFS. That 
means if a Namenode in safemode becomes active, the cluster will not be fully 
functioning for quite a while, even though it could be if a Namenode not in 
safemode were active instead.

The proposal here is to add an option that allows a Namenode to report itself 
as UNHEALTHY to ZKFC while it is in safemode, so that only a fully functioning 
Namenode can become active, improving the general availability of the cluster.
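The proposed behavior can be sketched in plain Java. This is a hypothetical illustration of the option described above (the class, method, and flag names are invented; it is not the actual Hadoop HAServiceProtocol implementation):

```java
// Sketch: with the proposed option enabled, a NameNode in safemode reports
// itself as UNHEALTHY so ZKFC will not elect it as the active NameNode.
public class SafemodeHealthCheckSketch {
    enum HealthState { HEALTHY, UNHEALTHY }

    // reportSafemodeAsUnhealthy models the proposed configuration option.
    static HealthState checkHealth(boolean inSafemode, boolean reportSafemodeAsUnhealthy) {
        if (inSafemode && reportSafemodeAsUnhealthy) {
            return HealthState.UNHEALTHY;
        }
        return HealthState.HEALTHY;
    }

    public static void main(String[] args) {
        System.out.println(checkHealth(true, true));   // UNHEALTHY: skipped by ZKFC
        System.out.println(checkHealth(true, false));  // HEALTHY: current behavior
    }
}
```

With the option off, a safemode NameNode still reports HEALTHY and can be elected active, which is the availability gap the proposal addresses.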






[jira] [Commented] (HDFS-14043) Tolerate corrupted seen_txid file

2018-11-05 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675943#comment-16675943
 ] 

Xiao Liang commented on HDFS-14043:
---

This fixes the same issue we met, thanks [~lukmajercak] for the patch, +1 for 
[^HDFS-14043.003.patch].

> Tolerate corrupted seen_txid file
> -
>
> Key: HDFS-14043
> URL: https://issues.apache.org/jira/browse/HDFS-14043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.9.2, 3.1.2, 2.9.3
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-14043.001.patch, HDFS-14043.002.patch, 
> HDFS-14043.003.patch
>
>
> We already tolerate IOExceptions when reading the seen_txid file from the 
> namenode's dirs, taking the maximum txid of all the *readable* namenode dirs. 
> We should extend this to the case where the file is corrupted. Currently, 
> PersistentLongFile.readFile throws NumberFormatException in this case and the 
> whole NN crashes.
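The tolerant behavior can be sketched in plain Java; the method name and inputs are illustrative and not the actual PersistentLongFile code:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: take the maximum txid across all storage dirs whose seen_txid
// content parses as a long, skipping corrupted entries instead of letting
// a NumberFormatException crash the NameNode.
public class SeenTxidSketch {
    static long maxReadableTxid(List<String> seenTxidContents) {
        long max = 0;
        for (String content : seenTxidContents) {
            try {
                max = Math.max(max, Long.parseLong(content.trim()));
            } catch (NumberFormatException e) {
                // Corrupted seen_txid file: tolerate it and move on.
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // One dir holds garbage; the other two are readable.
        System.out.println(maxReadableTxid(Arrays.asList("1042", "garbled", "1040"))); // 1042
    }
}
```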






[jira] [Commented] (HDFS-14049) TestHttpFSServerWebServer fails on Windows because of missing winutils.exe

2018-11-02 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673863#comment-16673863
 ] 

Xiao Liang commented on HDFS-14049:
---

[^HDFS-14049.001.patch] looks good to me, let's wait for the result from Yetus.

> TestHttpFSServerWebServer fails on Windows because of missing winutils.exe
> --
>
> Key: HDFS-14049
> URL: https://issues.apache.org/jira/browse/HDFS-14049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: windows
> Attachments: HDFS-14049.000.patch, HDFS-14049.001.patch
>
>
> 2018-11-02 15:00:51,780 WARN  Shell - Did not find winutils.exe: {}
> java.io.FileNotFoundException: Could not locate Hadoop executable: 
> D:\hadoop-2.9\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\test-dir\bin\winutils.exe
>  -see https://wiki.apache.org/hadoop/WindowsProblems
>   at org.apache.hadoop.util.Shell.getQualifiedBinInner(Shell.java:612)
>   at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:585)
>   at org.apache.hadoop.util.Shell.(Shell.java:682)
>   at org.apache.hadoop.util.StringUtils.(StringUtils.java:78)
>   at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1587)
>   at 
> org.apache.hadoop.fs.http.server.HttpFSServerWebServer.(HttpFSServerWebServer.java:93)
>   at 
> org.apache.hadoop.fs.http.server.TestHttpFSServerWebServer.setUp(TestHttpFSServerWebServer.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Commented] (HDFS-14049) TestHttpFSServerWebServer fails on Windows because of missing winutils.exe

2018-11-02 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673825#comment-16673825
 ] 

Xiao Liang commented on HDFS-14049:
---

Thanks [~elgoiri] for the fix, you could use getWinUtilsFile() for winutils.exe 
instead.

> TestHttpFSServerWebServer fails on Windows because of missing winutils.exe
> --
>
> Key: HDFS-14049
> URL: https://issues.apache.org/jira/browse/HDFS-14049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: windows
> Attachments: HDFS-14049.000.patch
>
>
> 2018-11-02 15:00:51,780 WARN  Shell - Did not find winutils.exe: {}
> java.io.FileNotFoundException: Could not locate Hadoop executable: 
> D:\hadoop-2.9\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\test-dir\bin\winutils.exe
>  -see https://wiki.apache.org/hadoop/WindowsProblems
>   at org.apache.hadoop.util.Shell.getQualifiedBinInner(Shell.java:612)
>   at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:585)
>   at org.apache.hadoop.util.Shell.(Shell.java:682)
>   at org.apache.hadoop.util.StringUtils.(StringUtils.java:78)
>   at 
> org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1587)
>   at 
> org.apache.hadoop.fs.http.server.HttpFSServerWebServer.(HttpFSServerWebServer.java:93)
>   at 
> org.apache.hadoop.fs.http.server.TestHttpFSServerWebServer.setUp(TestHttpFSServerWebServer.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657528#comment-16657528
 ] 

Xiao Liang commented on HDFS-13983:
---

+1 for [^HDFS-13983-03.patch]

> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HDFS-13893-with-patch-intellij-idea.JPG, 
> HDFS-13893-with-patch-mvn.JPG, 
> HDFS-13893-with-patch-without-sysout-close-intellij-idea.JPG, 
> HDFS-13893-without-patch-intellij-idea.JPG, HDFS-13893-without-patch-mvn.JPG, 
> HDFS-13983-01.patch, HDFS-13983-02.patch, HDFS-13983-03.patch
>
>
> TestOfflineImageViewer crashes on Windows because the OfflineImageViewer 
> REVERSEXML processor tries to delete the output file and re-create the same 
> stream that is already created.
> There are also unclosed RandomAccessFiles for the input files, which prevents 
> the files from being deleted.






[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-10 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645334#comment-16645334
 ] 

Xiao Liang commented on HDFS-13983:
---

Could you please share the test result with the patch on Windows?

> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HDFS-13983-01.patch
>
>
> TestOfflineImageViewer crashes on Windows because the OfflineImageViewer 
> REVERSEXML processor tries to delete the output file and re-create the same 
> stream that is already created.
> There are also unclosed RandomAccessFiles for the input files, which prevents 
> the files from being deleted.






[jira] [Assigned] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-28 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-11257:
-

Assignee: Anbang Hu  (was: Xiao Liang)

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Íñigo Goiri
>Assignee: Anbang Hu
>Priority: Major
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill 
> the DN and later other processes (e.g., logs or co-located services) start to 
> use the disk space, the remaining space will go negative and the used storage 
> will exceed 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention, while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.
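The arithmetic behind the negative-remaining scenario is simple to illustrate (values in GB; this is illustrative, not actual DataNode code):

```java
// Sketch: a 1 TB disk with dfs.datanode.du.reserved = 100 GB leaves the DN
// ~900 GB. If external processes later consume space, remaining goes negative.
public class RemainingSpaceSketch {
    static long remainingGb(long capacityGb, long dfsUsedGb, long nonDfsUsedGb, long reservedGb) {
        return capacityGb - reservedGb - dfsUsedGb - nonDfsUsedGb;
    }

    public static void main(String[] args) {
        // DN fills its 900 GB allowance, then co-located services take 50 GB more.
        System.out.println(remainingGb(1000, 900, 50, 100)); // -50: settings violated
    }
}
```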






[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-17 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16515246#comment-16515246
 ] 

Xiao Liang commented on HDFS-13681:
---

Thanks [~elgoiri], we can see that the case is fixed in the latest Windows 
daily build:

[https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-trunk-win/500/testReport/org.apache.hadoop.hdfs.server.namenode/TestStartup/]

 

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514367#comment-16514367
 ] 

Xiao Liang commented on HDFS-13681:
---

Sure, thank you [~elgoiri] for helping review. I uploaded 
[^HDFS-13681.001.patch] with the update.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Attachment: HDFS-13681.001.patch

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch, HDFS-13681.001.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Assigned] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-11257:
-

Assignee: Xiao Liang

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Major
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill 
> the DN and later other processes (e.g., logs or co-located services) start to 
> use the disk space, the remaining space will go negative and the used storage 
> will exceed 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention, while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.






[jira] [Commented] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513596#comment-16513596
 ] 

Xiao Liang commented on HDFS-13681:
---

There are 2 issues causing TestStartup.testNNFailToStartOnReadOnlyNNDir to fail 
on Windows:
 # Path comparison: the Path class does not work perfectly on Windows; from the 
error message we can see the two paths are logically identical, but their 
string literals are not;
 # Improper file permission API: according to the description of 
*org.apache.hadoop.fs.FileUtil#setWritable*, *File#setWritable* does not work 
as expected on Windows, so *FileUtil#setWritable* should be used instead.
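The path-comparison issue can be illustrated with a plain-string sketch, assuming the only differences are the separator style and a spurious leading slash before the drive letter (this is an invented helper, not the Hadoop Path class itself):

```java
// Sketch: normalize Windows path strings before comparing, so logically
// identical paths like "F:\a\b\" and "/F:/a/b/" compare equal.
public class PathCompareSketch {
    static String normalize(String p) {
        String s = p.replace('\\', '/');
        // Drop the spurious leading slash before a drive letter ("/F:/..." -> "F:/...").
        if (s.length() > 2 && s.charAt(0) == '/' && s.charAt(2) == ':') {
            s = s.substring(1);
        }
        // Ignore a trailing separator.
        if (s.endsWith("/")) {
            s = s.substring(0, s.length() - 1);
        }
        return s;
    }

    static boolean sameLogicalPath(String a, String b) {
        return normalize(a).equals(normalize(b));
    }

    public static void main(String[] args) {
        System.out.println(sameLogicalPath("F:\\data\\dfs\\name\\", "/F:/data/dfs/name/")); // true
    }
}
```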

Without the patch, the test result of this case on my local Windows machine is:

{color:#d04437}[INFO] 
---{color}
{color:#d04437}[INFO] T E S T S{color}
{color:#d04437}[INFO] 
---{color}
{color:#d04437}[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.TestStartup{color}
{color:#d04437}[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 4.269 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.TestStartup{color}
{color:#d04437}[ERROR] 
testNNFailToStartOnReadOnlyNNDir(org.apache.hadoop.hdfs.server.namenode.TestStartup)
 Time elapsed: 4.119 s <<< FAILURE!{color}
{color:#d04437}org.junit.ComparisonFailure: NN dir should be created after NN 
startup. 
expected:<[D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
 but 
was:<[/D:/Git/Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/testNNFailToStartOnReadOnlyNNDir/]name>{color}
{color:#d04437} at org.junit.Assert.assertEquals(Assert.java:115){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir(TestStartup.java:729){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[INFO]{color}
{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Failures:{color}
{color:#d04437}[ERROR] TestStartup.testNNFailToStartOnReadOnlyNNDir:729 NN dir 
should be created after NN startup. 
expected:<[D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
 but 
was:<[/D:/Git/Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/testNNFailToStartOnReadOnlyNNDir/]name>{color}

With the patch, the result on Windows is:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.TestStartup{color}
{color:#14892c}[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 4.481 s - in org.apache.hadoop.hdfs.server.namenode.TestStartup{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0{color}

The patch [^HDFS-13681.000.patch] can also be applied to branch-2.

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> 

[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Component/s: test

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Affects Version/s: 3.1.0
   2.9.1

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Status: Patch Available  (was: Open)

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Updated] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-15 Thread Xiao Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13681:
--
Attachment: HDFS-13681.000.patch

> Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows
> 
>
> Key: HDFS-13681
> URL: https://issues.apache.org/jira/browse/HDFS-13681
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13681.000.patch
>
>
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
>  fails on Windows with below error message:
> NN dir should be created after NN startup. 
> expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
>  but 
> was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>
> due to the path not being processed properly on Windows.






[jira] [Created] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-14 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13681:
-

 Summary: Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test 
failure on Windows
 Key: HDFS-13681
 URL: https://issues.apache.org/jira/browse/HDFS-13681
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Liang
Assignee: Xiao Liang


org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
 fails on Windows with below error message:

NN dir should be created after NN startup. 
expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
 but 
was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>

due to the path not being processed properly on Windows.






[jira] [Commented] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-31 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496151#comment-16496151
 ] 

Xiao Liang commented on HDFS-13631:
---

Thanks [~huanbang1993] for the fix, +1 for [^HDFS-13631.006.patch]

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13631.000.patch, HDFS-13631.001.patch, 
> HDFS-13631.002.patch, HDFS-13631.003.patch, HDFS-13631.004.patch, 
> HDFS-13631.005.patch, HDFS-13631.006.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before






[jira] [Comment Edited] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-30 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496139#comment-16496139
 ] 

Xiao Liang edited comment on HDFS-13631 at 5/31/18 5:50 AM:


Line 913 in the patch [^HDFS-13631.005.patch]:
{code:java}
Path path= new Path("/tmp.txt");
{code}
A space needed between "path" and "="?


was (Author: surmountian):
Line 913 in the patch:
{code:java}
Path path= new Path("/tmp.txt");
{code}
A space needed between "path" and "="?

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13631.000.patch, HDFS-13631.001.patch, 
> HDFS-13631.002.patch, HDFS-13631.003.patch, HDFS-13631.004.patch, 
> HDFS-13631.005.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before






[jira] [Commented] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-30 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496139#comment-16496139
 ] 

Xiao Liang commented on HDFS-13631:
---

Line 913 in the patch:
{code:java}
Path path= new Path("/tmp.txt");
{code}
A space needed between "path" and "="?

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13631.000.patch, HDFS-13631.001.patch, 
> HDFS-13631.002.patch, HDFS-13631.003.patch, HDFS-13631.004.patch, 
> HDFS-13631.005.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before






[jira] [Commented] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-30 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494738#comment-16494738
 ] 

Xiao Liang commented on HDFS-13629:
---

[^HDFS-13629.001.patch] looks good to me, +1 for the fix.

> Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster 
> path conflict and improper path usage
> -
>
> Key: HDFS-13629
> URL: https://issues.apache.org/jira/browse/HDFS-13629
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13629.000.patch, HDFS-13629.001.patch
>
>
> The following fail due to MiniDFSCluster path conflict:
> * 
> [testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
> * 
> [testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
> * 
> [testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
> * 
> [testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
> * 
> [testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
> * 
> [testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
> * 
> [testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
> * 
> [testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]
> The following fails due to improper path usage:
> * 
> [testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]






[jira] [Commented] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-29 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494698#comment-16494698
 ] 

Xiao Liang commented on HDFS-13632:
---

The way of setting test folders in [^HDFS-13632.004.patch] and 
[^HDFS-13632-branch-2.004.patch] looks reasonable, +1 for the fix.

> Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA 
> 
>
> Key: HDFS-13632
> URL: https://issues.apache.org/jira/browse/HDFS-13632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13632-branch-2.000.patch, 
> HDFS-13632-branch-2.001.patch, HDFS-13632-branch-2.004.patch, 
> HDFS-13632.000.patch, HDFS-13632.001.patch, HDFS-13632.002.patch, 
> HDFS-13632.003.patch, HDFS-13632.004.patch
>
>
> As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
> testUpgradeCommand keeps journalnode directory from being released, which 
> fails all subsequent tests that try to use the same path.
> Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA can isolate effects of tests from each other.






[jira] [Commented] (HDDS-130) TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath to set output directory

2018-05-29 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494052#comment-16494052
 ] 

Xiao Liang commented on HDDS-130:
-

For the test output folder, GenericTestUtils#getRandomizedTempPath may be a better 
choice. When different test cases use the same default output path, conflicts can 
arise between them, especially on Windows; this can be avoided by using 
randomized test paths.
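
The idea of a per-test randomized path can be sketched in plain Java. This is an 
illustrative stand-in for GenericTestUtils#getRandomizedTempPath, not Hadoop's 
actual implementation; the class and method names below are made up for the demo:

```java
import java.nio.file.Paths;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of a randomized per-test temp path, in the spirit of
// GenericTestUtils#getRandomizedTempPath; not the actual Hadoop code.
class RandomizedTempPathDemo {
    // Build a unique path under java.io.tmpdir for the given test name by
    // appending a random 10-character alphanumeric suffix.
    static String getRandomizedTempPath(String testName) {
        String alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        StringBuilder suffix = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            suffix.append(alphabet.charAt(
                ThreadLocalRandom.current().nextInt(alphabet.length())));
        }
        return Paths.get(System.getProperty("java.io.tmpdir"),
                         testName, suffix.toString()).toString();
    }

    public static void main(String[] args) {
        // Two calls for the same test name yield different directories, so
        // concurrent or repeated runs do not collide on a single fixed path.
        System.out.println(getRandomizedTempPath("TestGenerateOzoneRequiredConfigurations"));
        System.out.println(getRandomizedTempPath("TestGenerateOzoneRequiredConfigurations"));
    }
}
```

Because each test run gets its own directory, a leaked file handle from one run 
cannot make the next run's directory cleanup fail.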

> TestGenerateOzoneRequiredConfigurations should use 
> GenericTestUtils#getTempPath to set output directory
> ---
>
> Key: HDDS-130
> URL: https://issues.apache.org/jira/browse/HDDS-130
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nanda kumar
>Priority: Minor
>  Labels: newbie
>
> {{TestGenerateOzoneRequiredConfigurations}} uses the current directory (.) as 
> its output location, which generates the {{ozone-site.xml}} file in the 
> directory from where the test cases are executed. Instead, we should use 
> {{GenericTestUtils#getTempPath}} to get the output directory for test cases.






[jira] [Commented] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-29 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493282#comment-16493282
 ] 

Xiao Liang commented on HDFS-13631:
---

Is it on Windows?

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13631.000.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before






[jira] [Commented] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-29 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493248#comment-16493248
 ] 

Xiao Liang commented on HDFS-13629:
---

Path conflicts have caused many test cases using MiniDFSCluster to fail, and 
adding a randomized path (HDFS-13408) is a valid way to fix them.

For the fix of test failures of TestDiskBalancerCommand on Windows, how about 
adding the path randomization logic in 
DiskBalancerTestUtil#newImbalancedCluster?

> Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster 
> path conflict and improper path usage
> -
>
> Key: HDFS-13629
> URL: https://issues.apache.org/jira/browse/HDFS-13629
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13629.000.patch
>
>
> The following fail due to MiniDFSCluster path conflict:
> * 
> [testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
> * 
> [testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
> * 
> [testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
> * 
> [testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
> * 
> [testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
> * 
> [testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
> * 
> [testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
> * 
> [testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]
> The following fails due to improper path usage:
> * 
> [testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]






[jira] [Commented] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-25 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490365#comment-16490365
 ] 

Xiao Liang commented on HDFS-13618:
---

Thanks [~elgoiri] for reviewing. Those 2 failed cases reported by Yetus should 
not be caused by the patch.

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Commented] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490039#comment-16490039
 ] 

Xiao Liang commented on HDFS-13618:
---

With the patch, the tests for both trunk and branch-2 pass on my local Windows 
machine:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector{color}
{color:#14892c}[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 11.343 s - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0{color}

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Status: Patch Available  (was: Open)

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Attachment: HDFS-13618.000.patch

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch, HDFS-13618.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Description: 
Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

{color:#d04437}Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.{color}

It's a common error, similar to other test failures on Windows that have already been fixed.

  was:
Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.

It's a common error, similar to other test failures on Windows that have already been fixed.


> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> {color:#d04437}Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.{color}
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Updated] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13618:
--
Attachment: HDFS-13618-branch-2.000.patch

> Fix TestDataNodeFaultInjector test failures on Windows
> --
>
> Key: HDFS-13618
> URL: https://issues.apache.org/jira/browse/HDFS-13618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13618-branch-2.000.patch
>
>
> Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
> error like:
> Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
>  is not a valid DFS filename.
> It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Created] (HDFS-13618) Fix TestDataNodeFaultInjector test failures on Windows

2018-05-24 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13618:
-

 Summary: Fix TestDataNodeFaultInjector test failures on Windows
 Key: HDFS-13618
 URL: https://issues.apache.org/jira/browse/HDFS-13618
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang
Assignee: Xiao Liang


Currently test cases of TestDataNodeFaultInjector are failing on Windows with 
error like:

Pathname 
/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 from 
F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/XpN2p8YCDv/TestDataNodeFaultInjector/verifyFaultInjectionDelayPipeline/test.data
 is not a valid DFS filename.

It's a common error, similar to other test failures on Windows that have already been fixed.






[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-22 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484714#comment-16484714
 ] 

Xiao Liang commented on HDFS-13588:
---

For 1, according to the description of 
org.apache.hadoop.fs.FileUtil#setWritable, File#setWritable does not work as 
expected on Windows, yet it is used in TestFsDatasetImpl and fails there. This 
fix replaces File#setWritable with org.apache.hadoop.fs.FileUtil#setWritable so 
that the test passes on Windows.

For 2, similar to other test case failures on Windows where MiniDFSCluster uses 
a fixed path across test cases, this fix uses a randomized path for the base 
dir of MiniDFSCluster (introduced in HDFS-13408) to avoid failures due to 
conflicts between test cases.
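
The pitfall with the plain java.io.File API can be illustrated with a minimal, 
self-contained sketch. This uses only java.io.File; Hadoop's 
FileUtil#setWritable additionally handles Windows ACLs via native helpers, 
which is not reproduced here:

```java
import java.io.File;
import java.io.IOException;

// Minimal illustration of why the boolean return value of
// java.io.File#setWritable must be checked: on some platforms (notably
// Windows) the call can fail and simply return false instead of throwing.
// Hadoop's org.apache.hadoop.fs.FileUtil#setWritable exists to work around
// exactly this kind of platform difference.
class SetWritableDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("setwritable-demo", ".tmp");
        f.deleteOnExit();

        boolean ok = f.setWritable(false);
        // On POSIX file systems this normally succeeds; on Windows it may
        // not, which matches the failure mode seen in TestFsDatasetImpl.
        System.out.println("setWritable(false) returned: " + ok);
        System.out.println("canWrite() now reports: " + f.canWrite());

        // Restore writability so the temp file can be cleaned up.
        f.setWritable(true);
    }
}
```

A test that silently ignores the return value behaves differently per platform, 
which is why routing the call through a platform-aware helper is the safer fix.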

> Fix TestFsDatasetImpl test failures on Windows
> --
>
> Key: HDFS-13588
> URL: https://issues.apache.org/jira/browse/HDFS-13588
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch
>
>
> Some test cases of TestFsDatasetImpl failed on Windows due to:
>  # using File#setWritable interface;
>  # test directory conflict between test cases (details in HDFS-13408);
>  






[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows

2018-05-18 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480409#comment-16480409
 ] 

Xiao Liang commented on HDFS-13587:
---

Using the same path for different test cases is a common cause of test failures 
on Windows; [^HDFS-13587.001.patch] is a reasonable fix with a randomized path, 
similar to what was added to MiniDFSCluster.

+1 for [^HDFS-13587.001.patch]

> TestQuorumJournalManager fails on Windows
> -
>
> Key: HDFS-13587
> URL: https://issues.apache.org/jira/browse/HDFS-13587
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch
>
>
> There are 12 test failures in TestQuorumJournalManager on Windows. Local run 
> shows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 
> 106.81 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] 
> testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager)
>   Time elapsed: 1.93 s  <<< ERROR!
> org.apache.hadoop.hdfs.qjournal.client.QuorumException:
> Could not format one or more JournalNodes. 2 successful responses:
> 127.0.0.1:27044: null [success]
> 127.0.0.1:27064: null [success]
> 1 exceptions thrown:
> 127.0.0.1:27054: Directory 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212)
> at 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at 

[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-17 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479989#comment-16479989
 ] 

Xiao Liang commented on HDFS-13588:
---

Yes it does, uploaded [^HDFS-13588.000.patch] for trunk.

> Fix TestFsDatasetImpl test failures on Windows
> --
>
> Key: HDFS-13588
> URL: https://issues.apache.org/jira/browse/HDFS-13588
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch
>
>
> Some test cases of TestFsDatasetImpl failed on Windows due to:
>  # using File#setWritable interface;
>  # test directory conflict between test cases (details in HDFS-13408);
>  






[jira] [Updated] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-17 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13588:
--
Attachment: HDFS-13588.000.patch

> Fix TestFsDatasetImpl test failures on Windows
> --
>
> Key: HDFS-13588
> URL: https://issues.apache.org/jira/browse/HDFS-13588
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch
>
>
> Some test cases of TestFsDatasetImpl failed on Windows due to:
>  # using File#setWritable interface;
>  # test directory conflict between test cases (details in HDFS-13408);
>  






[jira] [Updated] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-17 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13588:
--
Attachment: HDFS-13588-branch-2.000.patch

> Fix TestFsDatasetImpl test failures on Windows
> --
>
> Key: HDFS-13588
> URL: https://issues.apache.org/jira/browse/HDFS-13588
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13588-branch-2.000.patch
>
>
> Some test cases of TestFsDatasetImpl failed on Windows due to:
>  # using File#setWritable interface;
>  # test directory conflict between test cases (details in HDFS-13408);
>  






[jira] [Created] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-17 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13588:
-

 Summary: Fix TestFsDatasetImpl test failures on Windows
 Key: HDFS-13588
 URL: https://issues.apache.org/jira/browse/HDFS-13588
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang
Assignee: Xiao Liang


Some test cases of TestFsDatasetImpl failed on Windows due to:
 # using File#setWritable interface;
 # test directory conflict between test cases (details in HDFS-13408);

 






[jira] [Commented] (HDFS-13570) TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows

2018-05-16 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477183#comment-16477183
 ] 

Xiao Liang commented on HDFS-13570:
---

[~elgoiri] yes, this is a typical issue with the org.apache.hadoop.fs.Path class on Windows, while java.io.File does the right thing.

I think the implementation of org.apache.hadoop.fs.Path might need some revision; for now, [^HDFS-13570.000.patch] is a reasonable fix for the test cases that failed on Windows.

+1 and thanks [~huanbang1993] for the fix.
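As a hypothetical illustration (not the actual org.apache.hadoop.fs.Path implementation), plain java.net.URI parsing shows why a URI-based path class can misread a Windows drive letter, while java.io.File treats the same string as an ordinary file path:

```java
import java.io.File;
import java.net.URI;

public class WindowsPathDemo {
    public static void main(String[] args) {
        // A Windows-style path fed into URI parsing: the drive letter is
        // misread as a URI scheme rather than part of the path.
        URI uri = URI.create("F:/short/hadoop-trunk-win/s/target/test");
        System.out.println("scheme = " + uri.getScheme()); // prints "scheme = F"
        System.out.println("path   = " + uri.getPath());   // "/short/hadoop-trunk-win/s/target/test"

        // java.io.File, by contrast, treats the string purely as a file path.
        File f = new File("F:/short/hadoop-trunk-win");
        System.out.println("file path = " + f.getPath());
    }
}
```

The path strings here are made up for the demonstration; the real test failures came from MiniDFSCluster base directories under a drive letter like F:.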

> TestQuotaByStorageType,TestQuota,TestDFSOutputStream fail on Windows
> 
>
> Key: HDFS-13570
> URL: https://issues.apache.org/jira/browse/HDFS-13570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13570.000.patch
>
>
> [Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> that the following 20 test cases fail on Windows with same error "Pathname 
> /F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  from 
> F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/wzEFWgBokV/TestQuotaByStorageType/testStorageSpaceQuotaWithRepFactor
>  is not a valid DFS filename.":
> {code:java}
> TestQuotaByStorageType#testStorageSpaceQuotaWithRepFactor
> TestQuotaByStorageType#testStorageSpaceQuotaPerQuotaClear
> TestQuotaByStorageType#testStorageSpaceQuotaWithWarmPolicy
> TestQuota#testQuotaCommands
> TestQuota#testSetAndClearSpaceQuotaRegular
> TestQuota#testQuotaByStorageType
> TestQuota#testNamespaceCommands
> TestQuota#testSetAndClearSpaceQuotaByStorageType
> TestQuota#testMaxSpaceQuotas
> TestQuota#testSetAndClearSpaceQuotaNoAccess
> TestQuota#testSpaceQuotaExceptionOnAppend
> TestQuota#testSpaceCommands
> TestQuota#testBlockAllocationAdjustsUsageConservatively
> TestQuota#testMultipleFilesSmallerThanOneBlock
> TestQuota#testHugeFileCount
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetSpaceQuotaNegativeNumber
> TestQuota#testSpaceQuotaExceptionOnClose
> TestQuota#testSpaceQuotaExceptionOnFlush
> TestDFSOutputStream#testPreventOverflow{code}
> There are 2 test cases failing with error "It should be one line error 
> message like: clrSpaceQuota: Directory does not exist:  directory> expected:<1> but was:<2>"
> {code:java}
> TestQuota#testSetAndClearSpaceQuotaPathIsFile
> TestQuota#testSetAndClearSpaceQuotaDirecotryNotExist
> {code}
>  
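A simplified sketch (an assumption, not the real DFSUtilClient validation code) of the kind of check behind the "is not a valid DFS filename" error: an HDFS path must be absolute and its components must not contain ':', so a Windows drive letter leaking into the path fails validation.

```java
public class DfsNameCheckDemo {
    // Hypothetical simplified validity check for demonstration only.
    static boolean looksLikeValidDfsName(String src) {
        if (!src.startsWith("/")) {
            return false;              // DFS paths must be absolute
        }
        for (String component : src.split("/")) {
            if (component.contains(":")) {
                return false;          // ':' is reserved; "F:" trips this
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // The Windows base dir leaks a drive letter into the DFS path:
        System.out.println(looksLikeValidDfsName(
            "/F:/short/hadoop-trunk-win/s/target/test/data")); // false
        System.out.println(looksLikeValidDfsName("/user/test/data")); // true
    }
}
```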






[jira] [Commented] (HDFS-13567) TestNameNodeMetrics#testGenerateEDEKTime,TestNameNodeMetrics#testResourceCheck should use a different cluster basedir

2018-05-15 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476354#comment-16476354
 ] 

Xiao Liang commented on HDFS-13567:
---

Yes, this is a common issue for MiniDFSCluster on Windows; HDFS-13408 introduced a way to fix it, and the same fix should be applied to test cases like these two.

+1 for [^HDFS-13567.000.patch], thanks [~huanbang1993] for the fix.

> TestNameNodeMetrics#testGenerateEDEKTime,TestNameNodeMetrics#testResourceCheck
>  should use a different cluster basedir
> -
>
> Key: HDFS-13567
> URL: https://issues.apache.org/jira/browse/HDFS-13567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13567.000.patch
>
>
> TestNameNodeMetrics#testGenerateEDEKTime,TestNameNodeMetrics#testResourceCheck
>  create a new cluster other than using the one created in @Before. On 
> Windows, they fail due to path conflict:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 135.546 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics{color}
> {color:#d04437}[ERROR] 
> testResourceCheck(org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics)
>  Time elapsed: 94.632 s <<< ERROR!{color}
> {color:#d04437}java.io.IOException: Could not fully delete 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
> {color:#d04437} at 
> org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testResourceCheck(TestNameNodeMetrics.java:813){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
> {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
> {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
> {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
> {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
> {color:#d04437} at 

[jira] [Commented] (HDFS-13548) TestResolveHdfsSymlink#testFcResolveAfs fails on Windows

2018-05-15 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476276#comment-16476276
 ] 

Xiao Liang commented on HDFS-13548:
---

[^HDFS-13548.000.patch] looks good to me, +1 for it.

The path format from File#getAbsolutePath on Windows can be quite different from that on Linux, which sometimes breaks the assumptions of test cases like this one. 
Thanks [~huanbang1993] for the fix.

> TestResolveHdfsSymlink#testFcResolveAfs fails on Windows
> 
>
> Key: HDFS-13548
> URL: https://issues.apache.org/jira/browse/HDFS-13548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13548.000.patch
>
>
> {color:#33}TestResolveHdfsSymlink#testFcResolveAfs fails on Windows with 
> error message:{color}
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.fs.TestResolveHdfsSymlink{color}
>  {color:#d04437}[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 28.574 s <<< FAILURE! - in 
> org.apache.hadoop.fs.TestResolveHdfsSymlink{color}
>  {color:#d04437}[ERROR] 
> testFcResolveAfs(org.apache.hadoop.fs.TestResolveHdfsSymlink) Time elapsed: 
> 0.039 s <<< ERROR!{color}
>  {color:#d04437}java.io.IOException: Mkdirs failed to create 
> [file:/E:/OSS/hadoop/hadoop-hdfs-project/hadoop-hdfs/file:/E:/OSS/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/n014HnmeeA|file:///E:/OSS/hadoop-branch-2/hadoop-hdfs-project/hadoop-hdfs/file:/E:/OSS/hadoop-branch-2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/n014HnmeeA]{color}
>  {color:#d04437} at 
> org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:360){color}
>  {color:#d04437} at 
> org.apache.hadoop.fs.TestResolveHdfsSymlink.testFcResolveAfs(TestResolveHdfsSymlink.java:88){color}
>  {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
>  {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
>  {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
>  {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
>  {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
>  {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
>  {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
>  {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
>  {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
>  {color:#d04437} at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
>  {color:#d04437} at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
>  {color:#d04437} at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
>  {color:#d04437} at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
>  {color:#d04437} at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}
> {color:#d04437}[INFO]{color}
>  {color:#d04437}[INFO] Results:{color}
>  

[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: image-2018-05-09-16-31-40-981.png

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469685#comment-16469685
 ] 

Xiao Liang commented on HDFS-13537:
---

Sure, here's before:

!image-2018-05-09-16-29-50-976.png!

And this is after the patch:

!image-2018-05-09-16-31-40-981.png!

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: image-2018-05-09-16-29-50-976.png

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468552#comment-16468552
 ] 

Xiao Liang commented on HDFS-13537:
---

The test result looks good; also added [^HDFS-13537-branch-2.000.patch] for 
branch-2.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: HDFS-13537-branch-2.000.patch

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Comment Edited] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468293#comment-16468293
 ] 

Xiao Liang edited comment on HDFS-13537 at 5/9/18 4:33 AM:
---

Thanks [~elgoiri]; sure, please take a look at [^HDFS-13537.001.patch] 
with the variable extracted.

In the test result, the failed cases do not seem related to the patch; they 
don't call the method changed in the patch.


was (Author: surmountian):
Thanks [~elgoiri], sure, please help take a look at [^HDFS-13537.001.patch] 
with variable extracted.

In the test result, the failed cases seem not related with the the patch, they 
don't call the method changed in the patch.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468293#comment-16468293
 ] 

Xiao Liang commented on HDFS-13537:
---

Thanks [~elgoiri]; sure, please take a look at [^HDFS-13537.001.patch] 
with the variable extracted.

In the test result, the failed cases do not seem related to the patch; they 
don't call the method changed in the patch.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: HDFS-13537.001.patch

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Status: Patch Available  (was: Open)

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: HDFS-13537.000.patch

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467989#comment-16467989
 ] 

Xiao Liang commented on HDFS-13537:
---

The failed tests related to this on Windows are:

org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem.testOperationDoAs[*]

I'm preparing a patch with the fix to upload.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Assigned] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-13537:
-

Assignee: Xiao Liang

> TestHdfsHelper does not generate jceks path properly for relative path
> --
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> while the path from getTestRootDir() is a relative path (in windows), the 
> result will be incorrect due to no "/" between "://file" and the relative 
> path.






[jira] [Created] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path

2018-05-08 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13537:
-

 Summary: TestHdfsHelper does not generate jceks path properly for 
relative path
 Key: HDFS-13537
 URL: https://issues.apache.org/jira/browse/HDFS-13537
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang


In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
{code:java}
final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
Since the path from getTestRootDir() is a relative path (on Windows), the 
result is incorrect because there is no "/" between "://file" and the relative path.
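A small sketch of the concatenation problem described above, using hypothetical directory values: without a "/" between "://file" and a relative path, the first path segment fuses with "file" into a bogus URI authority.

```java
import java.net.URI;

public class JceksPathDemo {
    public static void main(String[] args) {
        String scheme = "jceks";                    // JavaKeyStoreProvider.SCHEME_NAME
        String relativeDir = "target/test-dir";     // hypothetical relative test root
        String absoluteDir = "/home/user/test-dir"; // hypothetical absolute test root

        // Relative path: "file" + "target" fuse into a bogus authority.
        URI broken = URI.create(scheme + "://file" + relativeDir + "/test.jks");
        System.out.println(broken.getAuthority()); // "filetarget" -- wrong

        // Absolute path: "file" stays the authority and the path is intact.
        URI ok = URI.create(scheme + "://file" + absoluteDir + "/test.jks");
        System.out.println(ok.getAuthority()); // "file"
        System.out.println(ok.getPath());      // "/home/user/test-dir/test.jks"
    }
}
```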






[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-30 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: HDFS-13503.001.patch

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, 
> HDFS-13503-branch-2.001.patch, HDFS-13503.000.patch, HDFS-13503.001.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.
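The randomized-base-dir idea from HDFS-13408 that these fixes build on can be sketched as follows. This is an illustrative stand-in, not MiniDFSCluster's actual API; the directory layout and method name are assumptions.

```java
import java.io.File;
import java.util.UUID;

// Give each MiniDFSCluster run its own data directory so tests on Windows
// do not collide on files that a previous (or parallel) run still holds open.
public class RandomizedBaseDir {

    static File randomBaseDir(String testName) {
        // target/test/data is the conventional Maven test-data root;
        // the UUID suffix is what makes each run independent.
        return new File("target/test/data", testName + "-" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        // Two runs of the same test get two distinct directories.
        System.out.println(randomBaseDir("TestFsck"));
        System.out.println(randomBaseDir("TestFsck"));
    }
}
```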






[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-30 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: (was: HDFS-13503.001.patch)

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, 
> HDFS-13503-branch-2.001.patch, HDFS-13503.000.patch, HDFS-13503.001.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Commented] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-30 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16458459#comment-16458459
 ] 

Xiao Liang commented on HDFS-13503:
---

Sure, thanks [~elgoiri], updated with [^HDFS-13503.001.patch] and 
[^HDFS-13503-branch-2.001.patch].

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, 
> HDFS-13503-branch-2.001.patch, HDFS-13503.000.patch, HDFS-13503.001.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-30 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: HDFS-13503-branch-2.001.patch

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, 
> HDFS-13503-branch-2.001.patch, HDFS-13503.000.patch, HDFS-13503.001.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-30 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: HDFS-13503.001.patch

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, HDFS-13503.000.patch, 
> HDFS-13503.001.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Commented] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457176#comment-16457176
 ] 

Xiao Liang commented on HDFS-13503:
---

Sure, added [^HDFS-13503.000.patch] for trunk.

Before [^HDFS-13503.000.patch] the test result for trunk is:

{code}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestFsck
[ERROR] Tests run: 32, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 209.85 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFsck
[ERROR] testFsckUpgradeDomain(org.apache.hadoop.hdfs.server.namenode.TestFsck)  Time elapsed: 1.943 s <<< ERROR!
java.io.IOException: Could not fully delete D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1031)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:952)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:884)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:517)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:476)
    at org.apache.hadoop.hdfs.server.namenode.TestFsck.testUpgradeDomain(TestFsck.java:2255)
    at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckUpgradeDomain(TestFsck.java:2230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   TestFsck.testFsckUpgradeDomain:2230->testUpgradeDomain:2255 » IO Could not ful...
[INFO]
[ERROR] Tests run: 32, Failures: 0, Errors: 1, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:41 min
[INFO] Finished at: 2018-04-27T14:33:26-07:00
[INFO] Final Memory: 34M/651M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.21.0:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
{code}

With [^HDFS-13503.000.patch] it's:

{code}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO]
{code}

[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: HDFS-13503.000.patch

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch, HDFS-13503.000.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
> fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457174#comment-16457174
 ] 

Xiao Liang commented on HDFS-13509:
---

I checked the code; 
org.apache.hadoop.hdfs.server.datanode.LocalReplica#breakHardlinks is not 
called by the failed case 
+[TestBlockReaderLocal.testStatisticsForErasureCodingRead|https://builds.apache.org/job/PreCommit-HDFS-Build/24097/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocal/testStatisticsForErasureCodingRead/]+. 
That test has actually been failing for a while, so the failure should not be 
caused by this patch.

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13509:
--
Attachment: HDFS-13509.001.patch

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457061#comment-16457061
 ] 

Xiao Liang commented on HDFS-13509:
---

And my local test result for trunk without [^HDFS-13509.000.patch] is:

{code}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.TestFileAppend
[ERROR] Tests run: 13, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 59.103 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
[ERROR] testBreakHardlinksIfNeeded(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 7.725 s <<< ERROR!
java.io.IOException: Unable to rename D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-991234096-10.123.154.103-1524862692241\current\finalized\subdir0\subdir0\blk_1073741825.unlinked to D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-991234096-10.123.154.103-1524862692241\current\finalized\subdir0\subdir0\blk_1073741825
    at org.apache.hadoop.fs.FileUtil.replaceFile(FileUtil.java:1369)
    at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.replaceFile(FileIoProvider.java:533)
    at org.apache.hadoop.hdfs.server.datanode.LocalReplica.breakHardlinks(LocalReplica.java:200)
    at org.apache.hadoop.hdfs.server.datanode.LocalReplica.breakHardLinksIfNeeded(LocalReplica.java:239)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetTestUtil.breakHardlinksIfNeeded(FsDatasetTestUtil.java:71)
    at org.apache.hadoop.hdfs.TestFileAppend.testBreakHardlinksIfNeeded(TestFileAppend.java:162)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   TestFileAppend.testBreakHardlinksIfNeeded:162 » IO Unable to rename D:\Git\Had...
[INFO]
[ERROR] Tests run: 13, Failures: 0, Errors: 1, Skipped: 0
[INFO]
[INFO]
{code}

[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457054#comment-16457054
 ] 

Xiao Liang commented on HDFS-13509:
---

In my local test of branch-2, the run without the patch currently fails:

{code}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.TestFileAppend
[ERROR] Tests run: 13, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 46.473 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
[ERROR] testFailedAppendBlockRejection(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 1.269 s <<< ERROR!
java.io.IOException:
Cannot remove data directory: D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
path 'D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data':
    absolute:D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
    permissions: drwx
{code}

And with the patch the result is:

{code}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.TestFileAppend
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.192 s - in org.apache.hadoop.hdfs.TestFileAppend
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:05 min
[INFO] Finished at: 2018-04-26T15:29:19-07:00
[INFO] Final Memory: 33M/854M
[INFO] ------------------------------------------------------------------------
{code}

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456909#comment-16456909
 ] 

Xiao Liang commented on HDFS-13509:
---

Thanks [~elgoiri] for reviewing.

Sure, just added [^HDFS-13509.000.patch] for trunk.

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-27 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13509:
--
Attachment: HDFS-13509.000.patch

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-26 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13509:
--
Status: Patch Available  (was: Open)

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13509-branch-2.000.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-26 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13509:
--
Attachment: HDFS-13509-branch-2.000.patch

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13509-branch-2.000.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the solution is to use a randomized MiniDFSCluster base 
> dir via HDFS-13408.






[jira] [Created] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-26 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13509:
-

 Summary: Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, 
and fix TestFileAppend failures on Windows
 Key: HDFS-13509
 URL: https://issues.apache.org/jira/browse/HDFS-13509
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang
Assignee: Xiao Liang


breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
file while the source is still open as an input stream, which fails and throws 
an exception on Windows. This is the cause of the unit test case 
org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failure on 
Windows.

Other test cases of TestFileAppend fail randomly on Windows because they share 
the same test folder; the solution is to use a randomized MiniDFSCluster base 
dir via HDFS-13408.
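The fix pattern can be sketched as follows. This is not the actual LocalReplica code: it copies a (hypothetical) block file to a temp file and makes sure no stream is left open before the replace, which is the part that matters on Windows, where renaming over a file with an open handle fails.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

// Illustrative sketch of "copy then replace, with no stream held open".
public class BreakHardlinkSketch {

    static void breakHardlink(File blockFile) throws IOException {
        File tmp = new File(blockFile.getParentFile(), blockFile.getName() + ".unlinked");
        // Files.copy opens and closes its own streams, so nothing is left
        // open on blockFile when we reach the move below.
        Files.copy(blockFile.toPath(), tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
        // Safe now even on Windows: no open handle on blockFile.
        Files.move(tmp.toPath(), blockFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("blk", ".dat");
        Files.write(f.toPath(), "replica-bytes".getBytes());
        breakHardlink(f);
        // Contents survive the copy-and-replace round trip.
        System.out.println(new String(Files.readAllBytes(f.toPath())));
    }
}
```

The bug described above is the inverse ordering: the replace was attempted while the source input stream was still open.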






[jira] [Commented] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-25 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453349#comment-16453349
 ] 

Xiao Liang commented on HDFS-13503:
---

In my local machine, current status on Windows:

{color:#d04437}[INFO] 
---{color}
{color:#d04437}[INFO] T E S T S{color}
{color:#d04437}[INFO] 
---{color}
{color:#d04437}[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.TestFsck{color}
{color:#d04437}[ERROR] Tests run: 28, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 190.863 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.TestFsck{color}
{color:#d04437}[ERROR] 
testFsckUpgradeDomain(org.apache.hadoop.hdfs.server.namenode.TestFsck) Time 
elapsed: 2.283 s <<< ERROR!{color}
{color:#d04437}java.io.IOException: Could not fully delete 
D:\Git\Hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1043){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:513){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:472){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.TestFsck.testUpgradeDomain(TestFsck.java:2072){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckUpgradeDomain(TestFsck.java:2047){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[INFO]{color}
{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] 
TestFsck.testFsckUpgradeDomain:2047->testUpgradeDomain:2072 » IO Could not 
ful...{color}

With the patch:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.TestFsck{color}
{color:#14892c}[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 183.202 s - in org.apache.hadoop.hdfs.server.namenode.TestFsck{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] 
{color}
{color:#14892c}[INFO] BUILD SUCCESS{color}
{color:#14892c}[INFO] 
{color}
{color:#14892c}[INFO] Total time: 03:15 min{color}
{color:#14892c}[INFO] Finished at: 2018-04-25T18:15:37-07:00{color}
{color:#14892c}[INFO] Final Memory: 26M/634M{color}
{color:#14892c}[INFO] 
{color}

 

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a 
> similar fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix to the getStorageDir() interface, which 
> should use determineDfsBaseDir() to get the correct path of the data 
> directory.




[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-25 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Status: Patch Available  (was: Open)

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a 
> similar fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix to the getStorageDir() interface, which 
> should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-25 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13503:
--
Attachment: HDFS-13503-branch-2.000.patch

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13503-branch-2.000.patch
>
>
> Test failures on Windows are caused by the same issue as HDFS-13336; a 
> similar fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix to the getStorageDir() interface, which 
> should use determineDfsBaseDir() to get the correct path of the data 
> directory.






[jira] [Created] (HDFS-13503) Fix TestFsck test failures on Windows

2018-04-24 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13503:
-

 Summary: Fix TestFsck test failures on Windows
 Key: HDFS-13503
 URL: https://issues.apache.org/jira/browse/HDFS-13503
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs
Reporter: Xiao Liang
Assignee: Xiao Liang


Test failures on Windows are caused by the same issue as HDFS-13336; a similar 
fix is needed for TestFsck, based on HDFS-13408.

MiniDFSCluster also needs a small fix to the getStorageDir() interface, which 
should use determineDfsBaseDir() to get the correct path of the data directory.
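A hedged, self-contained sketch of the getStorageDir() fix described above. The method names echo MiniDFSCluster's, but the bodies are simplified stand-ins, and the data-dir numbering scheme is illustrative rather than the exact Hadoop source:

```java
public class StorageDirSketch {
    // Simplified stand-in for MiniDFSCluster#determineDfsBaseDir(): fall back
    // to a fixed default only when no per-test base dir was configured.
    static String determineDfsBaseDir(String configuredBaseDir) {
        return configuredBaseDir != null ? configuredBaseDir : "build/test/data/dfs";
    }

    // The fix: derive the storage dir from the cluster's actual base dir
    // rather than a hard-coded default path, so a randomized base dir is
    // honored. The "data" + (2 * dnIndex + dirIndex + 1) packing mimics
    // MiniDFSCluster's per-datanode directory naming.
    static String getStorageDir(String configuredBaseDir, int dnIndex, int dirIndex) {
        return determineDfsBaseDir(configuredBaseDir)
                + "/data/data" + (2 * dnIndex + dirIndex + 1);
    }

    public static void main(String[] args) {
        // With a randomized base dir configured, the storage path follows it.
        System.out.println(getStorageDir("/tmp/run-1234", 0, 1)); // prints "/tmp/run-1234/data/data2"
    }
}
```

Without this indirection, getStorageDir() would point at the old default path while the cluster itself writes under the randomized one, and tests would inspect the wrong directory.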






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449120#comment-16449120
 ] 

Xiao Liang commented on HDFS-13336:
---

The failed tests are not related to this change; should this be good to go?

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448957#comment-16448957
 ] 

Xiao Liang commented on HDFS-13336:
---

The test result without [^HDFS-13336.003.patch] in Windows is like:

{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] TestWriteToReplica.testAppend:88 » IO Could not fully 
delete D:\Git\MT\OSSHado...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testClose:66 » IO Could not fully 
delete D:\Git\MT\OSSHadoo...{color}
{color:#d04437}[ERROR] 
TestWriteToReplica.testReplicaMapAfterDatanodeRestart:512 » IO Could not 
fully...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testWriteToRbw:108 » IO Could not 
fully delete D:\Git\MT\OS...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testWriteToTemporary:128 » IO Could 
not fully delete D:\Git...{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 6, Failures: 0, Errors: 5, Skipped: 0{color}

And with the patch it is:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica{color}
{color:#14892c}[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 15.151 s - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0{color}

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> 

[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448899#comment-16448899
 ] 

Xiao Liang commented on HDFS-13336:
---

Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408; I have updated the patch 
[^HDFS-13336.003.patch] based on it, which should fix the test failures on 
Windows.

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Comment Edited] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448899#comment-16448899
 ] 

Xiao Liang edited comment on HDFS-13336 at 4/23/18 9:38 PM:


Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408 , I have updated the patch 
[^HDFS-13336.003.patch] basing on it, which should fix the test failures for 
windows.


was (Author: surmountian):
Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408 , I have updated the patch 
[^HDFS-13336.003.patch] basing on it, which should fix the test failures for 
windodws.

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Updated] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13336:
--
Attachment: HDFS-13336.003.patch

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-21 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446737#comment-16446737
 ] 

Xiao Liang commented on HDFS-13408:
---

+1 on [^HDFS-13408.004.patch].

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch, HDFS-13408.001.patch, 
> HDFS-13408.002.patch, HDFS-13408.003.patch, HDFS-13408.004.patch
>
>
> Generated files of MiniDFSCluster during test are not properly cleaned in 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, the conflict of test path of test 
> cases will be avoided, even if they are running at the same time.






[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-20 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445615#comment-16445615
 ] 

Xiao Liang commented on HDFS-13408:
---

Thank you [~giovanni.fumarola] for the suggestion, and thank you 
[~chris.douglas] for adding more problem description; it is correct and 
precise, and the updated patch looks good to me.
{quote}Would the failing tests also fail to run in parallel, even on 
non-Windows platforms?
{quote}
I don't have an environment to test parallel runs on non-Windows platforms, 
but if test cases using the same base dir and test data file names run in 
parallel, there should be file read/write conflicts between them.
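The conflict described above can be illustrated with a tiny JDK-only example (path names here are illustrative, not Hadoop's actual defaults): two test cases using a fixed base dir resolve to the identical path, while adding a per-run random component keeps them apart.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

public class ParallelDirConflict {
    // Illustrative: derive a per-run directory next to the fixed base,
    // so concurrent runs never touch the same files.
    static Path perRunDir(Path fixedBase) {
        return fixedBase.resolveSibling(fixedBase.getFileName() + "-" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        // Both "tests" resolve the fixed default to the same path:
        // parallel runs would race on the same data files.
        Path fixedA = Paths.get("target", "test", "data", "dfs");
        Path fixedB = Paths.get("target", "test", "data", "dfs");
        System.out.println(fixedA.equals(fixedB)); // prints "true"

        // With a per-run random component the collision disappears.
        System.out.println(perRunDir(fixedA).equals(perRunDir(fixedB))); // prints "false"
    }
}
```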

[~elgoiri], as for the test cases, the 3 failures in the latest run don't seem 
related to this change, since the default path is the same as before; they 
might be random failures or caused by other changes.

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch, HDFS-13408.001.patch, 
> HDFS-13408.002.patch
>
>
> Generated files of MiniDFSCluster during test are not properly cleaned in 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, the conflict of test path of test 
> cases will be avoided, even if they are running at the same time.






[jira] [Updated] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13408:
--
Status: Patch Available  (was: Open)

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch
>
>
> Generated files of MiniDFSCluster during test are not properly cleaned in 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, the conflict of test path of test 
> cases will be avoided, even if they are running at the same time.






[jira] [Updated] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-06 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13408:
--
Attachment: HDFS-13408.000.patch

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13408.000.patch
>
>
> Generated files of MiniDFSCluster during test are not properly cleaned in 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, the conflict of test path of test 
> cases will be avoided, even if they are running at the same time.





