[ 
https://issues.apache.org/jira/browse/AMBARI-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Denissov updated AMBARI-15449:
----------------------------------------
    Attachment: AMBARI-15449.branch_2_2.patch

patch applies to trunk as well

> HAWQ hdfs-client / output.replace-datanode-on-failure should be set to true 
> by default
> --------------------------------------------------------------------------------------
>
>                 Key: AMBARI-15449
>                 URL: https://issues.apache.org/jira/browse/AMBARI-15449
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Alexander Denissov
>            Assignee: Alexander Denissov
>            Priority: Minor
>             Fix For: 2.2.2
>
>         Attachments: AMBARI-15449.branch_2_2.patch
>
>
> On large clusters, replace-datanode-on-failure should be set to true, but on 
> small clusters (developer or testing environments) it should be set to false; 
> otherwise, if datanodes are overloaded, HDFS will report errors. 
> This is why it was set to false by default earlier. 
> Ambari should set it to true when the cluster size is > 4, and to false otherwise.
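
As a rough illustration only (not the attached patch), the recommendation logic described above could be expressed as a small stack-advisor-style helper; the function name and host-list parameter below are hypothetical, while the threshold of 4 comes from the issue description:

{code}
# Illustrative sketch, assuming a stack-advisor-style hook; names are hypothetical.
def recommend_replace_datanode_on_failure(datanode_hosts):
    """Return 'true' when the cluster has more than 4 DataNodes, else 'false'."""
    return "true" if len(datanode_hosts) > 4 else "false"

if __name__ == "__main__":
    small_cluster = ["dn1", "dn2", "dn3"]
    large_cluster = ["dn%d" % i for i in range(1, 7)]
    print(recommend_replace_datanode_on_failure(small_cluster))  # false
    print(recommend_replace_datanode_on_failure(large_cluster))  # true
{code}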



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
