[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-09-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481102#comment-15481102
 ] 

Xiao Chen commented on HDFS-10756:
--

bq. We have a holiday next week (Mid-Autumn Day)
Sure, thanks for the heads-up. Hope you enjoy some good mooncakes with your 
family. :)

I also forgot to say in my last comment: we should add docs in this patch (to 
WebHDFS.md, for example).

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch
>
>
> Currently, the Hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
> to determine the trash directory at run time. The default trash dir is under 
> {{/user/$USER}}.
> For an encrypted file, since moving files between/into/out of EZs is not 
> allowed, when an EZ file is deleted via the CLI, it calls into the [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
> to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or Java users who call the FileSystem 
> API. But for users going through httpfs/webhdfs, there is currently no way to 
> figure out what the trash root would be. This jira proposes adding such an 
> interface to httpfs and webhdfs.
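
For reference, a minimal sketch of how a Java client resolves the trash root today through the FileSystem API (the class name and path below are illustrative; only {{FileSystem#getTrashRoot}} comes from the description above). Once the same operation is exposed over httpfs/webhdfs, REST clients could get the equivalent answer without a Java client.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrashRootExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // For a path inside an encryption zone, DistributedFileSystem resolves a
    // trash directory under that same EZ; otherwise the default under
    // /user/$USER is used.
    Path trashRoot = fs.getTrashRoot(new Path("/ez/dir/file"));  // path is illustrative
    System.out.println("Trash root: " + trashRoot);
  }
}
{code}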






[jira] [Commented] (HDFS-10850) getEZForPath should NOT throw FNF

2016-09-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481053#comment-15481053
 ] 

Rakesh R commented on HDFS-10850:
-

Thank you [~daryn] for pointing this out. The javadoc in 
{{HdfsAdmin#getEncryptionZoneForPath}} says that FNF is thrown if the path does 
not exist. IMHO, the javadoc has to be modified to convey the intent clearly so 
that we can avoid such situations in the future. I'm interested in taking the 
discussion forward and working on this jira.

[Reference 
HdfsAdmin.java#L335|https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java#L335]
{code}
   * Get the path of the encryption zone for a given file or directory.
   *
   * @param path The path to get the ez for.
   *
   * @return The EncryptionZone of the ez, or null if path is not in an ez.
   * @throws IOException            if there was a general IO exception
   * @throws AccessControlException if the caller does not have access to path
   * @throws FileNotFoundException  if the path does not exist
{code}

[discussion 
thread|https://issues.apache.org/jira/browse/HDFS-9348?focusedCommentId=14986075&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14986075]
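
One possible rewording of the javadoc quoted above, in line with the pre-HDFS-9433 semantics described in this jira (illustrative wording only, not the committed fix):

{code:java}
   /**
    * Get the path of the encryption zone for a given file or directory.
    *
    * @param path The path to get the ez for. The path itself need not exist;
    *             the EZ of the closest existing ancestor is considered.
    * @return The EncryptionZone of the ez, or null if path is not in an ez.
    * @throws IOException            if there was a general IO exception
    * @throws AccessControlException if the caller does not have access to path
    */
{code}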

> getEZForPath should NOT throw FNF
> -
>
> Key: HDFS-10850
> URL: https://issues.apache.org/jira/browse/HDFS-10850
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Rakesh R
>Priority: Blocker
>
> HDFS-9433 made an incompatible change to the semantics of getEZForPath. It 
> used to return the EZ of the closest ancestor path, and it never threw FNF. A 
> common use of getEZForPath is determining whether a file can be renamed or must 
> be copied due to mismatched EZs. Notably, this has broken Hive.
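
A hedged sketch of the caller pattern described above, to make the breakage concrete. The helper name and structure are illustrative; it relies only on {{HdfsAdmin#getEncryptionZoneForPath}} returning the zone (or null) for both paths, which is exactly what an FNF on a not-yet-existing destination breaks.

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

public class RenameOrCopy {
  // Decide whether src can simply be renamed to dst, or must be copied
  // because the two paths are in mismatched encryption zones.
  static boolean canRename(HdfsAdmin admin, Path src, Path dst) throws Exception {
    EncryptionZone srcZone = admin.getEncryptionZoneForPath(src);
    EncryptionZone dstZone = admin.getEncryptionZoneForPath(dst); // dst may not exist yet
    if (srcZone == null && dstZone == null) {
      return true;                 // neither path is in an EZ
    }
    if (srcZone == null || dstZone == null) {
      return false;                // exactly one side is in an EZ, so copy
    }
    return srcZone.getPath().equals(dstZone.getPath());  // same EZ: rename is fine
  }
}
{code}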






[jira] [Assigned] (HDFS-10850) getEZForPath should NOT throw FNF

2016-09-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-10850:
---

Assignee: Rakesh R

> getEZForPath should NOT throw FNF
> -
>
> Key: HDFS-10850
> URL: https://issues.apache.org/jira/browse/HDFS-10850
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Rakesh R
>Priority: Blocker
>
> HDFS-9433 made an incompatible change to the semantics of getEZForPath. It 
> used to return the EZ of the closest ancestor path, and it never threw FNF. A 
> common use of getEZForPath is determining whether a file can be renamed or must 
> be copied due to mismatched EZs. Notably, this has broken Hive.






[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-09-10 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480846#comment-15480846
 ] 

Yuanbo Liu commented on HDFS-10756:
---

[~xiaochen] Thanks for your comments.
We have a holiday next week (Mid-Autumn Day) and I will be taking a vacation, so I 
will reply later. Thanks again for your time!

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch
>
>
> Currently, the Hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
> to determine the trash directory at run time. The default trash dir is under 
> {{/user/$USER}}.
> For an encrypted file, since moving files between/into/out of EZs is not 
> allowed, when an EZ file is deleted via the CLI, it calls into the [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
> to move the file to a trash directory under the same EZ.
> This works perfectly fine for CLI users or Java users who call the FileSystem 
> API. But for users going through httpfs/webhdfs, there is currently no way to 
> figure out what the trash root would be. This jira proposes adding such an 
> interface to httpfs and webhdfs.






[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480835#comment-15480835
 ] 

Hudson commented on HDFS-10830:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10423/])
HDFS-10830. FsDatasetImpl#removeVolumes crashes with (arp: rev 
a99bf26a0899bcc4307c3a242c8414eaef555aa7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/AutoCloseableLock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when 
> vol being removed is in use
> --
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10830.01.patch, HDFS-10830.02.patch, 
> HDFS-10830.05.patch, HDFS-10830.06.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is concurrently 
> in use.
> It looks like {{removeVolumes()}} waits on the monitor object "this" (that is, 
> the FsDatasetImpl) without ever having locked it, leading to the 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for the 
> removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    // <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    // <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                // <== WAIT on "this"?? But we haven't
>                                                             //     locked it yet. This will cause
>                                                             //     IllegalMonitorStateException and
>                                                             //     crash the getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      // <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         // <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry :
>       blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              // <== Notify NN of Block removal
>     }
>   }
> }
> {code}
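
As a standalone illustration of the failure mode above (not Hadoop code, just core Java semantics): calling {{wait()}} on an object whose monitor the thread does not own fails immediately with IllegalMonitorStateException, which is exactly what happens when {{waitVolumeRemoved(5000, this)}} runs while only the separate {{datasetLock}} is held.

{code:java}
public class MonitorWaitDemo {
  public static void main(String[] args) throws InterruptedException {
    Object monitor = new Object();

    // Wrong: we never synchronized on 'monitor', so wait() throws right away.
    try {
      monitor.wait(5000);
    } catch (IllegalMonitorStateException e) {
      System.out.println("Crashes just like removeVolumes(): " + e);
    }

    // Correct: own the monitor before waiting on it.
    synchronized (monitor) {
      monitor.wait(100);  // times out normally, no exception
    }
  }
}
{code}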






[jira] [Updated] (HDFS-10830) FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10830:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen] and [~manojg]. I committed this for 2.8.0.

Manoj, I also credited you for the patch.

> FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when 
> vol being removed is in use
> --
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10830.01.patch, HDFS-10830.02.patch, 
> HDFS-10830.05.patch, HDFS-10830.06.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is concurrently 
> in use.
> It looks like {{removeVolumes()}} waits on the monitor object "this" (that is, 
> the FsDatasetImpl) without ever having locked it, leading to the 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for the 
> removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    // <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    // <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                // <== WAIT on "this"?? But we haven't
>                                                             //     locked it yet. This will cause
>                                                             //     IllegalMonitorStateException and
>                                                             //     crash the getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      // <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         // <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry :
>       blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              // <== Notify NN of Block removal
>     }
>   }
> }
> {code}






[jira] [Updated] (HDFS-10830) FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10830:
-
Summary: FsDatasetImpl#removeVolumes crashes with 
IllegalMonitorStateException when vol being removed is in use  (was: 
FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
vol being removed is in use)

> FsDatasetImpl#removeVolumes crashes with IllegalMonitorStateException when 
> vol being removed is in use
> --
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch, HDFS-10830.02.patch, 
> HDFS-10830.05.patch, HDFS-10830.06.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is concurrently 
> in use.
> It looks like {{removeVolumes()}} waits on the monitor object "this" (that is, 
> the FsDatasetImpl) without ever having locked it, leading to the 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for the 
> removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    // <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    // <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                // <== WAIT on "this"?? But we haven't
>                                                             //     locked it yet. This will cause
>                                                             //     IllegalMonitorStateException and
>                                                             //     crash the getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      // <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         // <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry :
>       blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              // <== Notify NN of Block removal
>     }
>   }
> }
> {code}






[jira] [Commented] (HDFS-10742) Measure lock time in FsDatasetImpl

2016-09-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480770#comment-15480770
 ] 

Arpit Agarwal commented on HDFS-10742:
--

Committed to branch-2 and branch-2.8 after a local unit test run.

> Measure lock time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> HDFS-10742.009.patch, HDFS-10742.010.patch, HDFS-10742.011.patch, 
> HDFS-10742.012.patch, HDFS-10742.013.patch, HDFS-10742.014.patch, 
> HDFS-10742.015.patch, HDFS-10742.016.patch, HDFS-10742.017.patch
>
>
> This JIRA proposes to measure the time the lock of {{FsDatasetImpl}} is 
> held by a thread. Doing so will allow us to gather lock statistics.
> This can be done by extending the {{AutoCloseableLock}} lock object in 
> {{FsDatasetImpl}}. In the future we can also consider replacing the lock with 
> a read-write lock.
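
A minimal, self-contained sketch of the idea: time how long the dataset lock is held and warn when a threshold is crossed. It wraps a plain ReentrantLock rather than extending {{org.apache.hadoop.util.AutoCloseableLock}}, whose exact extension points are not assumed here, and it ignores reentrant re-acquisition for brevity.

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class TimedAutoCloseableLock implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();
  private final long warnThresholdMs;
  private long acquiredAtMs;

  public TimedAutoCloseableLock(long warnThresholdMs) {
    this.warnThresholdMs = warnThresholdMs;
  }

  public TimedAutoCloseableLock acquire() {
    lock.lock();
    acquiredAtMs = System.currentTimeMillis();
    return this;
  }

  @Override
  public void close() {
    long heldMs = System.currentTimeMillis() - acquiredAtMs;
    lock.unlock();
    if (heldMs > warnThresholdMs) {
      System.err.println("Dataset lock held for " + heldMs + " ms");
    }
  }
}
// Usage mirrors the try-with-resources pattern already used in FsDatasetImpl:
//   try (TimedAutoCloseableLock l = datasetLock.acquire()) { ... }
{code}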






[jira] [Updated] (HDFS-10742) Measure lock time in FsDatasetImpl

2016-09-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10742:
-
Fix Version/s: 2.8.0

> Measure lock time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> HDFS-10742.009.patch, HDFS-10742.010.patch, HDFS-10742.011.patch, 
> HDFS-10742.012.patch, HDFS-10742.013.patch, HDFS-10742.014.patch, 
> HDFS-10742.015.patch, HDFS-10742.016.patch, HDFS-10742.017.patch
>
>
> This JIRA proposes to measure the time the lock of {{FsDatasetImpl}} is 
> held by a thread. Doing so will allow us to gather lock statistics.
> This can be done by extending the {{AutoCloseableLock}} lock object in 
> {{FsDatasetImpl}}. In the future we can also consider replacing the lock with 
> a read-write lock.






[jira] [Updated] (HDFS-10682) Replace FsDatasetImpl object lock with a separate lock object

2016-09-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10682:
-
Fix Version/s: 3.0.0-alpha2

> Replace FsDatasetImpl object lock with a separate lock object
> -
>
> Key: HDFS-10682
> URL: https://issues.apache.org/jira/browse/HDFS-10682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10682-branch-2.001.patch, 
> HDFS-10682-branch-2.002.patch, HDFS-10682-branch-2.003.patch, 
> HDFS-10682-branch-2.004.patch, HDFS-10682-branch-2.005.patch, 
> HDFS-10682-branch-2.006.patch, HDFS-10682.001.patch, HDFS-10682.002.patch, 
> HDFS-10682.003.patch, HDFS-10682.004.patch, HDFS-10682.005.patch, 
> HDFS-10682.006.patch, HDFS-10682.007.patch, HDFS-10682.008.patch, 
> HDFS-10682.009.patch, HDFS-10682.010.patch
>
>
> This Jira proposes to replace the FsDatasetImpl object lock with a separate 
> lock object. Doing so will make it easier to measure lock statistics like 
> lock held time and warn about potential lock contention due to slow disk 
> operations.
> Right now we can use org.apache.hadoop.util.AutoCloseableLock. In the future 
> we can also consider replacing the lock with a read-write lock.
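
A short sketch of the refactor being described. The surrounding field and method names are illustrative; only {{AutoCloseableLock#acquire()}} and the try-with-resources pattern come from the description and the FsDatasetImpl snippet quoted elsewhere in this digest.

{code:java}
import org.apache.hadoop.util.AutoCloseableLock;

class DatasetLockUsage {
  private final AutoCloseableLock datasetLock = new AutoCloseableLock();

  void mutateDatasetState() {
    // Before: synchronized (this) { ... }
    // After: a dedicated lock object, acquired via try-with-resources, which can
    // later be instrumented (HDFS-10742) or swapped for a read-write lock.
    try (AutoCloseableLock lock = datasetLock.acquire()) {
      // ... mutate volumeMap / volumes here ...
    }
  }
}
{code}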






[jira] [Commented] (HDFS-10682) Replace FsDatasetImpl object lock with a separate lock object

2016-09-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480768#comment-15480768
 ] 

Arpit Agarwal commented on HDFS-10682:
--

Thanks for the catch [~xiaochen], I've cherry-picked from branch-2.8 to 
branch-2.

> Replace FsDatasetImpl object lock with a separate lock object
> -
>
> Key: HDFS-10682
> URL: https://issues.apache.org/jira/browse/HDFS-10682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10682-branch-2.001.patch, 
> HDFS-10682-branch-2.002.patch, HDFS-10682-branch-2.003.patch, 
> HDFS-10682-branch-2.004.patch, HDFS-10682-branch-2.005.patch, 
> HDFS-10682-branch-2.006.patch, HDFS-10682.001.patch, HDFS-10682.002.patch, 
> HDFS-10682.003.patch, HDFS-10682.004.patch, HDFS-10682.005.patch, 
> HDFS-10682.006.patch, HDFS-10682.007.patch, HDFS-10682.008.patch, 
> HDFS-10682.009.patch, HDFS-10682.010.patch
>
>
> This Jira proposes to replace the FsDatasetImpl object lock with a separate 
> lock object. Doing so will make it easier to measure lock statistics like 
> lock held time and warn about potential lock contention due to slow disk 
> operations.
> Right now we can use org.apache.hadoop.util.AutoCloseableLock. In the future 
> we can also consider replacing the lock with a read-write lock.






[jira] [Commented] (HDFS-10838) Last full block report received time for each DN should be easily discoverable

2016-09-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480729#comment-15480729
 ] 

Arpit Agarwal commented on HDFS-10838:
--

Hi [~surendrasingh], I think either seconds or, perhaps more practically, minutes 
(since the full block report interval is in hours, the count in seconds could get 
rather large). That will allow sorting the column numerically to quickly 
scan for outliers.
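
To make the suggestion concrete, a hypothetical helper (names are illustrative, not the actual NN UI or dfsadmin code) that renders the age of the last full block report in whole minutes so the column stays numerically sortable:

{code:java}
public final class BlockReportAge {
  private BlockReportAge() {}

  // Returns the age of the last full block report in whole minutes,
  // or -1 if no full block report has been received yet.
  static long minutesSinceLastFbr(long lastFullBlockReportTimeMs, long nowMs) {
    if (lastFullBlockReportTimeMs <= 0) {
      return -1;
    }
    return (nowMs - lastFullBlockReportTimeMs) / (60L * 1000L);
  }
}
{code}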

> Last full block report received time for each DN should be easily discoverable
> --
>
> Key: HDFS-10838
> URL: https://issues.apache.org/jira/browse/HDFS-10838
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: DFSAdmin-Report.png, HDFS-10838-001.patch, 
> HDFS-10838.002.patch, HDFS-10838.003.patch, NN_UI.png, NN_UI_relative_time.png
>
>
> It should be easy for administrators to discover the time of the last full block 
> report from each DataNode.
> We can show it in the NameNode web UI or in the output of {{hdfs dfsadmin 
> -report}}, or both.






[jira] [Commented] (HDFS-10685) libhdfs++: return explicit error when non-secured client connects to secured server

2016-09-10 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480699#comment-15480699
 ] 

Bob Hansen commented on HDFS-10685:
---

Thanks for putting that together, [~vectorijk]!

I'll have to check when I get back to the office, but I think we may have a 
specific Status instance for an Authentication error which we should use in 
this case.  If we don't, we should add one for this.

Since Apache has the Hadoop JIRA locked down at the moment, feel free to put the 
patch up on GitHub in a fork of 
https://github.com/apache/hadoop/tree/HDFS-8707. Also, I think the patch you 
posted may be reversed. :-)

> libhdfs++: return explicit error when non-secured client connects to secured 
> server
> ---
>
> Key: HDFS-10685
> URL: https://issues.apache.org/jira/browse/HDFS-10685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>
> When a non-secured client tries to connect to a secured server, the first 
> indication is an error from RpcConnection::HandleRpcResponse complaining about 
> "RPC response with Unknown call id -33".
> We should insert code in HandleRpcResponse to detect whether the unknown call id 
> == RpcEngine::kCallIdSasl and, if so, return an informative error saying that an 
> unsecured client is connecting to a secured server.






[jira] [Commented] (HDFS-10685) libhdfs++: return explicit error when non-secured client connects to secured server

2016-09-10 Thread Kai Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480688#comment-15480688
 ] 

Kai Jiang commented on HDFS-10685:
--

cc [~bobhansen] [~anatoli.shein]

> libhdfs++: return explicit error when non-secured client connects to secured 
> server
> ---
>
> Key: HDFS-10685
> URL: https://issues.apache.org/jira/browse/HDFS-10685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>
> When a non-secured client tries to connect to a secured server, the first 
> indication is an error from RpcConnection::HandleRpcResponse complaining about 
> "RPC response with Unknown call id -33".
> We should insert code in HandleRpcResponse to detect whether the unknown call id 
> == RpcEngine::kCallIdSasl and, if so, return an informative error saying that an 
> unsecured client is connecting to a secured server.






[jira] [Commented] (HDFS-10682) Replace FsDatasetImpl object lock with a separate lock object

2016-09-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480063#comment-15480063
 ] 

Xiao Chen commented on HDFS-10682:
--

Hi [~arpitagarwal], I couldn't find this in branch-2, only in branch-2.8.
Could you point me to the branch-2 commit hash? Thanks.

> Replace FsDatasetImpl object lock with a separate lock object
> -
>
> Key: HDFS-10682
> URL: https://issues.apache.org/jira/browse/HDFS-10682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 2.8.0
>
> Attachments: HDFS-10682-branch-2.001.patch, 
> HDFS-10682-branch-2.002.patch, HDFS-10682-branch-2.003.patch, 
> HDFS-10682-branch-2.004.patch, HDFS-10682-branch-2.005.patch, 
> HDFS-10682-branch-2.006.patch, HDFS-10682.001.patch, HDFS-10682.002.patch, 
> HDFS-10682.003.patch, HDFS-10682.004.patch, HDFS-10682.005.patch, 
> HDFS-10682.006.patch, HDFS-10682.007.patch, HDFS-10682.008.patch, 
> HDFS-10682.009.patch, HDFS-10682.010.patch
>
>
> This Jira proposes to replace the FsDatasetImpl object lock with a separate 
> lock object. Doing so will make it easier to measure lock statistics like 
> lock held time and warn about potential lock contention due to slow disk 
> operations.
> Right now we can use org.apache.hadoop.util.AutoCloseableLock. In the future 
> we can also consider replacing the lock with a read-write lock.






[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480042#comment-15480042
 ] 

Xiao Chen commented on HDFS-10830:
--

Thanks Arpit, patch 6 LGTM, +1.
The failed test is a BindException and is unrelated.

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch, HDFS-10830.02.patch, 
> HDFS-10830.05.patch, HDFS-10830.06.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is concurrently 
> in use.
> It looks like {{removeVolumes()}} waits on the monitor object "this" (that is, 
> the FsDatasetImpl) without ever having locked it, leading to the 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for the 
> removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    // <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    // <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                // <== WAIT on "this"?? But we haven't
>                                                             //     locked it yet. This will cause
>                                                             //     IllegalMonitorStateException and
>                                                             //     crash the getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      // <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         // <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry :
>       blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              // <== Notify NN of Block removal
>     }
>   }
> }
> {code}






[jira] [Commented] (HDFS-10838) Last full block report received time for each DN should be easily discoverable

2016-09-10 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479789#comment-15479789
 ] 

Surendra Singh Lilhore commented on HDFS-10838:
---

Thanks [~arpitagarwal] for review.

bq. Do you know what helper_relative_time_now will print if the value is zero?
It will just print "a few seconds ago".
bq. I think we can print in the same format as heartbeat time so it remains 
numerically sortable.
Heartbeat time is only printed in seconds:
{code}{lastContact}s{code}
Do you want the block report time in seconds?

> Last full block report received time for each DN should be easily discoverable
> --
>
> Key: HDFS-10838
> URL: https://issues.apache.org/jira/browse/HDFS-10838
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: DFSAdmin-Report.png, HDFS-10838-001.patch, 
> HDFS-10838.002.patch, HDFS-10838.003.patch, NN_UI.png, NN_UI_relative_time.png
>
>
> It should be easy for administrators to discover the time of the last full block 
> report from each DataNode.
> We can show it in the NameNode web UI or in the output of {{hdfs dfsadmin 
> -report}}, or both.






[jira] [Commented] (HDFS-10855) Fix typos for HDFS documents

2016-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479468#comment-15479468
 ] 

Hadoop QA commented on HDFS-10855:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 
33s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10855 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827880/HDFS-10855.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux c0f01b2ebb9d 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bee9f57 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16705/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16705/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typos for HDFS documents 
> -
>
> Key: HDFS-10855
> URL: https://issues.apache.org/jira/browse/HDFS-10855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10855.001.patch
>
>
> There are many typos in the HDFS documents. The typos in detail:
> * {{HDFSHighAvailabilityWithNFS.md}}
> Beacuse->Because
> processs->process
> * {{ArchivalStorage.md}}
> specificed->specified
> * {{ViewFs.md}}
> Futher->Further
> * {{HdfsNfsGateway.md}}
> differnt->different
> regrulation->regulation
> * {{HdfsMultihoming.md}}, {{hdfs-default.xml}}
> adress->address




[jira] [Commented] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479455#comment-15479455
 ] 

Hadoop QA commented on HDFS-10830:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10830 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827879/HDFS-10830.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6a82f7ee0dda 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bee9f57 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16704/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16704/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16704/console 

[jira] [Updated] (HDFS-10855) Fix typos for HDFS documents

2016-09-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10855:
-
Attachment: HDFS-10855.001.patch

> Fix typos for HDFS documents 
> -
>
> Key: HDFS-10855
> URL: https://issues.apache.org/jira/browse/HDFS-10855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10855.001.patch
>
>
> There are many typos in the HDFS documents. The typos in detail:
> * {{HDFSHighAvailabilityWithNFS.md}}
> Beacuse->Because
> processs->process
> * {{ArchivalStorage.md}}
> specificed->specified
> * {{ViewFs.md}}
> Futher->Further
> * {{HdfsNfsGateway.md}}
> differnt->different
> regrulation->regulation
> * {{HdfsMultihoming.md}}, {{hdfs-default.xml}}
> adress->address






[jira] [Comment Edited] (HDFS-10855) Fix typos for HDFS documents

2016-09-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15479370#comment-15479370
 ] 

Yiqun Lin edited comment on HDFS-10855 at 9/10/16 7:39 AM:
---

Attach a simple patch to make a fix.


was (Author: linyiqun):
Attach a simple to make a fix.

> Fix typos for HDFS documents 
> -
>
> Key: HDFS-10855
> URL: https://issues.apache.org/jira/browse/HDFS-10855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10855.001.patch
>
>
> There are many typos in the HDFS documents. The typos in detail:
> * {{HDFSHighAvailabilityWithNFS.md}}
> Beacuse->Because
> processs->process
> * {{ArchivalStorage.md}}
> specificed->specified
> * {{ViewFs.md}}
> Futher->Further
> * {{HdfsNfsGateway.md}}
> differnt->different
> regrulation->regulation
> * {{HdfsMultihoming.md}}, {{hdfs-default.xml}}
> adress->address






[jira] [Updated] (HDFS-10855) Fix typos for HDFS documents

2016-09-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10855:
-
Status: Patch Available  (was: Open)

Attach a simple to make a fix.

> Fix typos for HDFS documents 
> -
>
> Key: HDFS-10855
> URL: https://issues.apache.org/jira/browse/HDFS-10855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There are many typos in the HDFS documents. The typos in detail:
> * {{HDFSHighAvailabilityWithNFS.md}}
> Beacuse->Because
> processs->process
> * {{ArchivalStorage.md}}
> specificed->specified
> * {{ViewFs.md}}
> Futher->Further
> * {{HdfsNfsGateway.md}}
> differnt->different
> regrulation->regulation
> * {{HdfsMultihoming.md}}, {{hdfs-default.xml}}
> adress->address






[jira] [Created] (HDFS-10855) Fix typos for HDFS documents

2016-09-10 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10855:


 Summary: Fix typos for HDFS documents 
 Key: HDFS-10855
 URL: https://issues.apache.org/jira/browse/HDFS-10855
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


There are many typos in the HDFS documents. The typos in detail:

* {{HDFSHighAvailabilityWithNFS.md}}
Beacuse->Because
processs->process

* {{ArchivalStorage.md}}
specificed->specified

* {{ViewFs.md}}
Futher->Further

* {{HdfsNfsGateway.md}}
differnt->different
regrulation->regulation

* {{HdfsMultihoming.md}}, {{hdfs-default.xml}}
adress->address






[jira] [Updated] (HDFS-10830) FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when vol being removed is in use

2016-09-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10830:
-
Attachment: HDFS-10830.06.patch

> FsDatasetImpl#removeVolumes() crashes with IllegalMonitorStateException when 
> vol being removed is in use
> 
>
> Key: HDFS-10830
> URL: https://issues.apache.org/jira/browse/HDFS-10830
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Arpit Agarwal
> Attachments: HDFS-10830.01.patch, HDFS-10830.02.patch, 
> HDFS-10830.05.patch, HDFS-10830.06.patch
>
>
> The {{FsDatasetImpl#removeVolumes()}} operation crashes abruptly with an 
> IllegalMonitorStateException whenever the volume being removed is concurrently 
> in use.
> It looks like {{removeVolumes()}} waits on the monitor object "this" (that is, 
> the FsDatasetImpl) without ever having locked it, leading to the 
> IllegalMonitorStateException. This monitor wait happens only when the volume 
> being removed is in use (reference count > 0). The thread performing the 
> remove-volume operation thus crashes abruptly, and block invalidations for the 
> removed volumes are skipped entirely.
> {code:title=FsDatasetImpl.java|borderStyle=solid}
> @Override
> public void removeVolumes(Set<File> volumesToRemove, boolean clearFailure) {
>   ..
>   ..
>   try (AutoCloseableLock lock = datasetLock.acquire()) {    // <== LOCK acquire datasetLock
>     for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
>       .. .. ..
>       asyncDiskService.removeVolume(sd.getCurrentDir());    // <== volume SD1 remove
>       volumes.removeVolume(absRoot, clearFailure);
>       volumes.waitVolumeRemoved(5000, this);                // <== WAIT on "this"?? But we haven't
>                                                             //     locked it yet. This will cause
>                                                             //     IllegalMonitorStateException and
>                                                             //     crash the getBlockReports()/FBR thread!
>       for (String bpid : volumeMap.getBlockPoolList()) {
>         List<ReplicaInfo> blocks = new ArrayList<>();
>         for (Iterator<ReplicaInfo> it = volumeMap.replicas(bpid).iterator();
>              it.hasNext(); ) {
>           .. .. ..
>           it.remove();                                      // <== volumeMap removal
>         }
>         blkToInvalidate.put(bpid, blocks);
>       }
>       .. ..
>   }                                                         // <== LOCK release datasetLock
>   // Call this outside the lock.
>   for (Map.Entry<String, List<ReplicaInfo>> entry :
>       blkToInvalidate.entrySet()) {
>     ..
>     for (ReplicaInfo block : blocks) {
>       invalidate(bpid, block);                              // <== Notify NN of Block removal
>     }
>   }
> }
> {code}


