Ah yes, there is some logic that prevents meta/root from being on the
same RS, which is desirable in a larger configuration.
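
For reference, that check lives in RegionManager.regionsAwaitingAssignment.
Roughly (this is a paraphrase, not the exact 0.20 source; rootRegionState is
just a placeholder name, while isMetaServer/isSingleServer are the booleans
that appear in your diff below), it looks like:

    boolean isMetaServer = ...;     // does this regionserver already hold .META.?
    boolean isSingleServer = ...;   // is it the only live regionserver?

    // ROOT is only offered to a server that does not already hold .META.,
    // which keeps the two apart on a bigger cluster, and is also what
    // blocks the last-server-standing case described below.
    if (!isMetaServer) {
        regionsToAssign.add(rootRegionState);
    }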

On Thu, Oct 15, 2009 at 11:16 AM, Yannis Pavlidis <[email protected]> wrote:
> Hey Ryan,
>
> I performed additional testing with some alternate configurations, and the
> problem arises ONLY when there is just one regionserver left and that
> regionserver already has the META table assigned to it.
>
> In this case the ROOT table does not get assigned to the last regionserver 
> (which holds the META table).
>
> Interestingly enough, though, when there is only one regionserver left that
> already has the ROOT table assigned to it, it can also have the META table
> re-assigned to it (if, again, it is the only server; i.e. in this scenario you
> can end up with one regionserver holding both the META and ROOT tables).
>
> Unless I am missing something, I cannot find any reason why we cannot assign
> the ROOT table to the regionserver that manages the META table if it is the
> only one remaining (again, I agree this is an extreme case).
>
> I applied and tested a fix (against the hbase-0.20.0 codebase) in
> RegionManager::regionsAwaitingAssignment, where I add the ROOT region to the
> regionsToAssign set if the server is the metaServer and also the only server.
>
> Here is the diff:
>
> diff RegionManager.java.FIXES RegionManager.java
> 414c414
> <       if ((!isMetaServer) || (isMetaServer && isSingleServer)) {
> ---
>>       if (!isMetaServer) {
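>
> In context, the patched check reads roughly like this (a paraphrase, not the
> exact source; the condition simplifies to !isMetaServer || isSingleServer,
> and rootRegionState is just a placeholder name):
>
>     // allow ROOT onto the META server only when it is the last server standing
>     if (!isMetaServer || isSingleServer) {
>         regionsToAssign.add(rootRegionState);
>     }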
>
> Let me know what you guys think.
>
> Thanks,
>
> Yannis.
>
> -----Original Message-----
> From: Ryan Rawson [mailto:[email protected]]
> Sent: Wed 10/14/2009 5:58 PM
> To: [email protected]
> Subject: Re: ROOT table does not get re-assigned
>
> Hey,
>
> Next time, you can use a service like pastebin.com or anything else like that.
>
> So the log looks ok until the end. At that point, the master is relying on a
> regionserver heartbeat back to the master so that the master has a chance to
> direct the regionserver. But it looks like the 3rd and last regionserver
> doesn't check in?
>
> You say this is highly reproducible? Are you able to run your test on more
> machines? A 3-node cluster is a little light; I wouldn't consider running
> HBase and HDFS on < 10 nodes for production. It could be an artifact of
> having only 3 nodes too...
>
> -ryan
>
>
