Zuoming Zhang commented on HDFS-13673:

Thanks [~elgoiri]

Answers to your questions:
 * Nope. The other call sites of _DataNodeTestUtils.injectDataDirFailure_ don't 
invoke it on the volume folder itself, so they are not affected by 
_in_use.lock_. I've also checked all the other tests that call 
_injectDataDirFailure_, and none of them fail.
 * What do you mean by extracting the variable? I think it is only used once in 
this file?

> TestNameNodeMetrics fails on Windows
> ------------------------------------
>                 Key: HDFS-13673
>                 URL: https://issues.apache.org/jira/browse/HDFS-13673
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.1.0, 2.9.1
>            Reporter: Zuoming Zhang
>            Priority: Minor
>              Labels: Windows
>             Fix For: 3.1.0, 2.9.1
>         Attachments: HDFS-13673.000.patch, 
> TestNameNodeMetrics-testVolumeFailures-Report.000.txt
> _TestNameNodeMetrics_ fails on Windows
> Problem:
> This is because _testVolumeFailures_ calls 
> _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What 
> _injectDataDirFailure_ does is rename the folder from _volume_name_ to 
> _volume_name_._origin_ and create a new file named _volume_name_. The volume 
> folder contains two things: 1. a directory named "_current_", 2. a file 
> named "_in_use.lock_". Windows behaves differently from Linux when renaming 
> the parent folder of a locked file: Windows prevents the rename, while Linux 
> allows it.
> Fix:
> So, in order to inject a data failure into the volume, instead of renaming 
> the volume folder itself, rename the folder inside it, which doesn't hold a 
> lock. Since the only folder inside the volume is "_current_", we only need 
> to inject the data failure into _volume_name/current_.
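To illustrate the idea outside of the Hadoop test harness, here is a minimal, self-contained Java sketch (the class name, helper method, and simulated volume layout are mine, not Hadoop code): it creates a volume-like directory containing a _current_ subdirectory and an _in_use.lock_ file, holds a lock on the lock file, and then renames _current_. Renaming the locked volume directory itself would fail on Windows, but renaming the unlocked _current_ subdirectory should work on both platforms.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;

public class CurrentDirRenameDemo {
    // Simulates a DataNode volume layout (volume/{current/, in_use.lock}),
    // then renames "current" while in_use.lock is held open and locked,
    // mirroring the fix proposed in HDFS-13673. Returns whether the rename
    // succeeded.
    public static boolean renameCurrentWhileLocked() throws Exception {
        Path volume = Files.createTempDirectory("volume");
        Path current = Files.createDirectory(volume.resolve("current"));
        File lockFile = volume.resolve("in_use.lock").toFile();

        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
             FileLock lock = raf.getChannel().lock()) {
            // On Windows, renaming the volume directory itself would fail
            // here because in_use.lock is open and locked. The unlocked
            // "current" subdirectory can still be renamed.
            File renamed = new File(current.toString() + ".origin");
            return current.toFile().renameTo(renamed);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("current rename succeeded: "
                + renameCurrentWhileLocked());
    }
}
```

Note this is only a sketch of the locking behavior; the actual patch just changes which directory is passed to _injectDataDirFailure_.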

This message was sent by Atlassian JIRA
