Hmm. Would something like this be useful:

    org.apache.hadoop.hbase.HFileLocalityChecker [options]

    Reports the number of local and non-local HFile blocks, and the ratio of
    local blocks as a percentage.

    Where options are:

      -f <file>    Analyze a store file
      -r <region>  Analyze all store files for the region
      -t <table>   Analyze all store files for regions of the table served
                   by the local regionserver
      -h <host>    Consider <host> local, defaults to the local host
      -v           Verbose operation


? Or overkill? Happy to code it up...
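For what it's worth, the core of the `-f` case could be quite small. Here is a minimal sketch of the counting logic, assuming the per-block replica hosts have already been fetched from the NameNode (in the real tool they would come from `FileSystem#getFileBlockLocations` / `BlockLocation#getHosts`); the class and method names below are just placeholders:

```java
import java.util.List;

public class HFileLocalityChecker {

  /** True if any replica of a block lives on the host we consider local. */
  static boolean isLocal(String[] replicaHosts, String localHost) {
    for (String host : replicaHosts) {
      if (host.equals(localHost)) {
        return true;
      }
    }
    return false;
  }

  /** Percentage of blocks with at least one replica on localHost. */
  static double localityPercent(List<String[]> blockHosts, String localHost) {
    if (blockHosts.isEmpty()) {
      return 0.0;
    }
    int local = 0;
    for (String[] hosts : blockHosts) {
      if (isLocal(hosts, localHost)) {
        local++;
      }
    }
    return 100.0 * local / blockHosts.size();
  }

  public static void main(String[] args) {
    // Toy example: 3 blocks, 2 of them with a replica on the local host.
    List<String[]> blocks = List.of(
        new String[] {"rs1.example.com", "dn2.example.com"},
        new String[] {"dn3.example.com", "dn4.example.com"},
        new String[] {"rs1.example.com", "dn5.example.com"});
    System.out.println(localityPercent(blocks, "rs1.example.com") + "% local");
  }
}
```

The `-r` and `-t` modes would just walk the store file paths for the region(s) and feed each file's block locations through the same counters.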


Best regards,


       - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


----- Original Message -----
> From: Stack <[email protected]>
> To: [email protected]
> Cc: 
> Sent: Friday, December 16, 2011 3:11 PM
> Subject: Re: Is there an easy way to check HFile locality in HDFS?
> 
> On Thu, Dec 15, 2011 at 9:50 PM, Bruce Bian <[email protected]> wrote:
>>  Hi,
>>  some disks of one node in my hbase cluster were broken, and after I mounted
>>  some new ones and start regionserver/datanode on that node again, there
>>  can't be data locality anymore unless I trigger a major_compaction on the
>>  table manually (datanode/regionserver share the same physical node)
>>  My question is, is there an easy way to check that all the regionservers
>>  have a copy of its regions on the same physical node,like a script or
>>  command,or else where to get the information so I can write one? I know
>>   the region info is stored in the .META. table, how about the region's
>>  hfile blocks?
> 
> 
> In 0.92, there is a locality metric that tells you how much of the
> regionserver load is local as a percentage that shows in the
> regionserver UI.
> 
> St.Ack
>
