[ https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470412#comment-13470412 ]

nkeywal commented on HDFS-3912:
-------------------------------

I will try the patch on HBase 0.96 next week (hopefully).
I had a look at the patch, and it seems OK to me. My only concern is this:
{code}
+      LOG.warn("The given interval for marking stale datanode = "
+          + staleInterval + ", which is smaller than the default value "
+          + DFSConfigKeys.DFS_NAMENODE_STALE_DATANODE_INTERVAL_DEFAULT
+          + ".");
{code}

I think we should not warn when the value is below the default, because:
- usually a default is just "the most common harmless setting", i.e. it 
should be possible to go below it without being in danger.
- a reasonable setting for HBase would be around 20s (so less than the HDFS 
default), to make sure the datanode is not used when we start the HBase 
recovery. So when used with HBase, we would get a warning for the 
recommended setting (see the sketch below).
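
To make the point concrete, here is a minimal sketch of the kind of check I 
have in mind: warn only when the interval drops below a sanity floor derived 
from the heartbeat interval, rather than below the default. The floor values 
here are illustrative assumptions, not numbers from the patch; staleInterval 
and LOG are the names already used in the quoted code.
{code}
// Sketch only: warn when the interval is too small to be meaningful,
// i.e. below a few heartbeat intervals, not merely below the default.
// The 3 and 3000L values are illustrative, not taken from the patch.
final int minHeartbeats = 3;              // assumed sanity floor
final long heartbeatIntervalMs = 3000L;   // assumed heartbeat period
long minStaleInterval = minHeartbeats * heartbeatIntervalMs;
if (staleInterval < minStaleInterval) {
  LOG.warn("The given interval for marking stale datanode = "
      + staleInterval + " ms is below " + minHeartbeats
      + " heartbeat intervals (" + minStaleInterval
      + " ms); stale detection may be unreliable.");
}
{code}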
                
> Detecting and avoiding stale datanodes for writing
> --------------------------------------------------
>
>                 Key: HDFS-3912
>                 URL: https://issues.apache.org/jira/browse/HDFS-3912
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>         Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch, 
> HDFS-3912.006.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for writes that skip stale 
> nodes.
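
For item 1 in the description above, one way the adaptive behaviour could 
work is to stop avoiding stale nodes for writes once too many nodes are 
stale, so a mass outage does not funnel all writes to a few survivors. A 
minimal sketch follows; the 0.5 ratio and all names are assumptions for 
illustration, not the patch's actual code.
{code}
// Sketch: when a large fraction of the cluster is stale (e.g. after a
// network partition), avoiding stale nodes would concentrate all writes
// on the few remaining nodes, so fall back to normal block placement.
// The 0.5f ratio and method/variable names are illustrative only.
boolean shouldAvoidStaleNodesForWrite(int numStale, int numTotal) {
  final float maxStaleRatio = 0.5f;
  return numTotal > 0 && ((float) numStale / numTotal) < maxStaleRatio;
}
{code}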
