Jack Levin <magnito@...> writes:
> I am on 0.89-830
>
> On Tue, Mar 29, 2011 at 10:44 AM, Ted Yu <yuzhihong@...> wrote:
> > Are you using 0.90.1 (where regions are randomly distributed across the
> > cluster)?
> > I logged HBASE-3373 but was told it is specific to our usage.
> >
> > On Tue, Mar 29, 2011 at 10:39 AM, Jack Levin <magnito@...> wrote:
> >
> >> Hello, we have this one table with about 12 regions that is super hot
> >> with writes. For some reason most of the regions were assigned to a
> >> single server, which causes it to flush and compact every 10 minutes,
> >> causing suboptimal performance. We do use random row keys, so I don't
> >> see how sorted rows could land on a single region anyway; the issue was
> >> that each region was assigned to the same region server. Is there a
> >> way to mitigate this? I tried closing the regions manually, but
> >> they went right back to the same server. Finally I shut down the RS
> >> process, and that table's regions went to random places around the
> >> cluster. It is still puzzling, though. Has anyone had issues like that?
> >>
> >> -Jack
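
A minimal sketch (not from the original thread) of the manual workaround Jack describes: list which region server hosts each region of the hot table, then ask the master to move a region elsewhere or to run the balancer. This assumes the 0.90-era client API (HTable.getRegionsInfo(), HBaseAdmin.move(), HBaseAdmin.balancer()); the table name, encoded region name and destination server string are placeholders.

  import java.util.Map;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HRegionInfo;
  import org.apache.hadoop.hbase.HServerAddress;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MoveHotRegions {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HBaseAdmin admin = new HBaseAdmin(conf);
      HTable table = new HTable(conf, "hot_table");          // placeholder table name

      // Print the current region -> region server assignment for the table.
      Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
      for (Map.Entry<HRegionInfo, HServerAddress> e : regions.entrySet()) {
        System.out.println(e.getKey().getRegionNameAsString() + " -> " + e.getValue());
      }

      // Ask the master to move one region to another server
      // ("host,port,startcode" form); passing null lets the master pick a server.
      String encodedRegionName = "0123456789abcdef0123456789abcdef"; // placeholder
      String destServer = "rs2.example.com,60020,1301234567890";     // placeholder
      admin.move(Bytes.toBytes(encodedRegionName), Bytes.toBytes(destServer));

      // Or simply ask the master to run the balancer now.
      admin.balancer();

      table.close();
    }
  }

Closing a region (as Jack tried) only asks the master to reassign it, so it can land right back on the same server; moving it to an explicit destination, or restarting the region server as he eventually did, forces a different placement.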
Hello, I am encountering the same problem with 0.90.1. All newly created regions (created by inserting puts via the Java API) are assigned to a single region server. What is interesting is that load balancing does work in general: the number of regions keeps changing on all region servers, and some 'old' regions on the heavily loaded region server are being reassigned to other region servers. I suspect this is a bug, since the system is more or less unusable like this.

Stan
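
For reference, a minimal sketch (not from the thread) of the kind of write load both posters describe: puts with random row keys through the 0.90-era Java client API. The table name, column family and value contents are placeholders; the point is that random keys spread writes across the key space, but only spread load across servers if the regions themselves are distributed.

  import java.util.UUID;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class RandomKeyWriter {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "hot_table");   // placeholder table name
      table.setAutoFlush(false);                      // batch puts client-side

      byte[] family = Bytes.toBytes("d");             // placeholder column family
      for (int i = 0; i < 100000; i++) {
        // Random row keys avoid a single hot key range, but if every region of
        // the table sits on one region server, that server still takes all writes.
        byte[] row = Bytes.toBytes(UUID.randomUUID().toString());
        Put put = new Put(row);
        put.add(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
        table.put(put);
      }
      table.flushCommits();
      table.close();
    }
  }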
