RE: reconfiguring storage

2017-07-09 Thread Brahma Reddy Battula
Decommission might not work properly if there are not enough nodes in the cluster; it might fail. --Brahma Reddy Battula From: Brian Jeltema [mailto:bdjelt...@gmail.com] Sent: 07 July 2017 22:24 To: user Subject: Re: reconfiguring storage I prefer to decommission - reconfigure - recommission
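
One way to sanity-check this before starting (a rough sketch, not from the thread; it assumes a standard hdfs client on the PATH and that dfs.replication is the cluster-wide replication factor):

    # how many datanodes are live vs. the replication factor
    hdfs dfsadmin -report | grep 'Live datanodes'
    hdfs getconf -confKey dfs.replication

    # if decommissioning would leave fewer live datanodes than the
    # replication factor, the decommission can hang waiting for blocks
    # to be re-replicated elsewhere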

Re: reconfiguring storage

2017-07-07 Thread Brian Jeltema
I prefer to decommission - reconfigure - recommission. If HDFS is configured to use volumes at /hdfs-1, /hdfs-2 and /hdfs-3, can I just delete the entire contents of those volumes before recommissioning? > On Jul 6, 2017, at 12:29 PM, daemeon reiydelle wrote: > > Another
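
If you do go that route, the cleanup itself is simple (a sketch only, assuming /hdfs-1, /hdfs-2 and /hdfs-3 are exactly the directories listed in dfs.datanode.data.dir, the DataNode on that host is stopped, and HDFS runs as the hdfs user in the hadoop group; a recommissioned DataNode recreates its VERSION and block-pool directories on startup):

    # run on the decommissioned node, with the DataNode stopped
    rm -rf /hdfs-1/* /hdfs-2/* /hdfs-3/*

    # keep ownership on the mount points so the DataNode can write to them
    chown hdfs:hadoop /hdfs-1 /hdfs-2 /hdfs-3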

Re: reconfiguring storage

2017-07-07 Thread sidharth kumar
From: daemeon reiydelle Sent: Thursday, 6 July, 9:59 PM Subject: Re: reconfiguring storage To: Brian Jeltema Cc: user Another option is to stop the node's relevant Hadoop services (including, e.g., Spark, Impala, etc., if applicable), move the existing local storage, mount the desired file system, and move the data over. Then just restart Hadoop. As long as this does not take too long, you don't have write

Re: reconfiguring storage

2017-07-06 Thread daemeon reiydelle
Another option is to stop the node's relevant Hadoop services (including, e.g., Spark, Impala, etc., if applicable), move the existing local storage, mount the desired file system, and move the data over. Then just restart Hadoop. As long as this does not take too long, you don't have write
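
In concrete terms that might look something like the following (a sketch under assumptions not stated in the thread: a Hadoop 2.x node managed with hadoop-daemon.sh, a single data directory at /hdfs-1, the new filesystem on /dev/sdb1, and HDFS running as the hdfs user):

    # stop the datanode (and any other services using the volume)
    sudo -u hdfs hadoop-daemon.sh stop datanode

    # set the old data aside, mount the new filesystem, copy the data back
    mv /hdfs-1 /hdfs-1.old
    mkdir /hdfs-1
    mount /dev/sdb1 /hdfs-1
    cp -a /hdfs-1.old/. /hdfs-1/
    chown -R hdfs:hadoop /hdfs-1

    # bring the datanode back, then remove the old copy once it is healthy
    sudo -u hdfs hadoop-daemon.sh start datanode

The "does not take too long" caveat matters because, by default, the NameNode marks a DataNode dead after roughly ten minutes of missed heartbeats and begins re-replicating its blocks elsewhere.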

reconfiguring storage

2017-07-06 Thread Brian Jeltema
I recently discovered that I made a mistake setting up some cluster nodes and didn’t attach storage to some mount points for HDFS. To fix this, I presume I should decommission the relevant nodes, fix the mounts, then recommission the nodes. My question is, when the nodes are recommissioned,
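
For reference, the decommission/recommission cycle itself is usually just an exclude-file edit plus a refresh (a sketch assuming dfs.hosts.exclude in hdfs-site.xml points at an exclude file such as /etc/hadoop/conf/dfs.exclude; the path and hostname below are placeholders):

    # decommission: add the node to the exclude file, then tell the NameNode
    echo "node1.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes

    # wait until the node shows "Decommission Status : Decommissioned"
    hdfs dfsadmin -report

    # recommission: remove the node from the exclude file and refresh again
    hdfs dfsadmin -refreshNodes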