Thanks for the suggestion. I will try this and update the thread if the data locality is restored.
Thanks,
Rahul

On Sat, May 9, 2015 at 6:36 PM, Dave Latham <[email protected]> wrote:

> Major compactions will fix locality, so long as there is space on the
> local data nodes and they actually happen. Also, if there is already
> only a single HFile in a store, major compaction may be skipped.
> Newer versions of HBase have a parameter,
> hbase.hstore.min.locality.to.skip.major.compact, that you can set to
> perform major compaction if locality is below some threshold, even if
> there is only one store file. (See HBASE-11195.)
>
> Dave
>
> On Sat, May 9, 2015 at 5:31 PM, Bryan Beaudreault <[email protected]> wrote:
>
> > Major compactions will restore locality to the cluster.
> >
> > On Sat, May 9, 2015 at 3:36 PM, Michael Segel <[email protected]> wrote:
> >
> >> First, understand why you had to create an ‘auto restart’ script.
> >>
> >> Taking down HBase completely (probably including ZooKeeper) and doing a
> >> full restart would probably fix the issue of data locality.
> >>
> >> > On May 9, 2015, at 5:05 PM, rahul malviya <[email protected]> wrote:
> >> >
> >> > Hi,
> >> >
> >> > My HBase cluster went through a rough patch recently where a lot of
> >> > region servers started dying because of a sudden increase in the amount
> >> > of data being funneled to the cluster, and we had to put an auto-restart
> >> > script in place for the regionservers.
> >> >
> >> > After this, all my data locality is lost and it does not seem to recover
> >> > even after compaction. This has degraded performance by a factor of 4,
> >> > so I want to know whether there is a way to restore the data locality of
> >> > my HBase cluster.
> >> >
> >> > I am using hbase-0.98.6-cdh5.2.0.
> >> >
> >> > Thanks,
> >> > Rahul
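
For reference, a minimal sketch of triggering the major compaction discussed above from the Java client API (the 0.98-era HBaseAdmin class). The table name "mytable" is a placeholder; the hbase shell equivalent is major_compact 'mytable'.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MajorCompactTable {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml (ZooKeeper quorum, etc.) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Asynchronously requests a major compaction of every region in
            // the table; "mytable" is a placeholder table name.
            admin.majorCompact("mytable");
        } finally {
            admin.close();
        }
    }
}

Note that if a store already holds only a single HFile the compaction may be skipped, as Dave points out; on releases that include HBASE-11195, setting hbase.hstore.min.locality.to.skip.major.compact in hbase-site.xml forces the compaction when locality falls below the configured threshold.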
