[ https://issues.apache.org/jira/browse/HDFS-12410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16166887#comment-16166887 ]

Anu Engineer commented on HDFS-12410:
-------------------------------------

bq. Pushing a config with an unsupported/misspelled type... losing the nodes 
with a clear error is easier to debug 

I can certainly see both sides of the argument. 

1. *Fail fast argument* -- Hadoop should have the smarts to fail when a config 
is wrong. As I said, I might have mistyped "disc", and now hundreds of disks 
are not working; since some disks still work, the cluster appears healthy. In 
that case, I would rather the config fail on me than have the mistake 
propagate to thousands of machines. If I am pushing a config with an error, it 
is better for it to fail immediately, so I can fix it as soon as I push the 
deploy-config button.
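To make the fail-fast position concrete, here is a minimal sketch (not Hadoop's actual StorageType parser; the enum and `parseStrict` helper are hypothetical) showing how strict enum parsing surfaces the "disc" typo at config-load time instead of silently dropping the storage:

```java
// Hypothetical illustration of fail-fast config parsing.
// StorageTypeDemo and parseStrict are not Hadoop APIs.
public class StorageTypeDemo {
    enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

    static StorageType parseStrict(String s) {
        // Enum.valueOf throws IllegalArgumentException on an unknown name,
        // so a typo like "disc" is rejected the moment the config is loaded.
        return StorageType.valueOf(s.trim().toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(parseStrict("disk"));   // DISK
        try {
            parseStrict("disc");                    // typo: fails fast
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: disc");
        }
    }
}
```

A lenient parser would instead catch the exception and skip the entry, which is exactly the behavior under debate: the node keeps running, but with fewer storages than the operator intended.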

2. *Postel's Law* - "be conservative in what you do, be liberal in what you 
accept from others." I do see that you are arguing that we can be liberal in 
what we accept in the config. Unfortunately, when a file system ignores a 
storage configuration option, it ceases to be conservative, in my opinion. 

So I am still inclined to argue that the current behavior is very much what 
you want.

> Ignore unknown StorageTypes
> ---------------------------
>
>                 Key: HDFS-12410
>                 URL: https://issues.apache.org/jira/browse/HDFS-12410
>             Project: Hadoop HDFS
>          Issue Type: Task
>          Components: datanode, fs
>            Reporter: Chris Douglas
>            Priority: Minor
>
> A storage configured with an unknown type will cause runtime exceptions. 
> Instead, these storages can be ignored/skipped.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
