Hi Johannes,

To override this behavior, you can do the following (sketched below the list):
- set phoenix.sequence.saltBuckets in your client-side hbase-site.xml to 0.
- manually disable and drop SYSTEM.SEQUENCE from an HBase shell. Note -
this assumes you're not using sequences - if you are, let us know.
- re-connect a Phoenix client to the cluster
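
For reference, a rough sketch of those steps (the XML goes in the
hbase-site.xml on your client's classpath; the shell commands assume you
have no sequence data you care about, and the sqlline connection string
is the one from your mail):

  <!-- client-side hbase-site.xml -->
  <property>
    <name>phoenix.sequence.saltBuckets</name>
    <value>0</value>
  </property>

  # from the HBase shell
  hbase> disable 'SYSTEM.SEQUENCE'
  hbase> drop 'SYSTEM.SEQUENCE'

  # then re-connect a Phoenix client, e.g.
  $ /usr/bin/phoenix-sqlline.py localhost:2181:/hbase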

At this point, the SYSTEM.SEQUENCE table will again be in a single region.
FWIW, we've changed the default value of phoenix.sequence.saltBuckets to
0 (I believe as of 4.6), as the previous value was too high for smaller
clusters.
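
If you want to sanity-check the region count afterwards, you can look at
the table's page on the Master web UI, or scan hbase:meta for its entries
(a sketch, assuming your shell version supports ROWPREFIXFILTER):

  hbase> scan 'hbase:meta', {ROWPREFIXFILTER => 'SYSTEM.SEQUENCE,', COLUMNS => 'info:regioninfo'}

A single-region table should show exactly one info:regioninfo row.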

Thanks,
James

On Sun, Feb 21, 2016 at 11:57 PM, Johannes <[email protected]> wrote:

> Hi,
>
> I am using Cloudera Manager on a six-node cluster (master: aaa, the
> others: node[1-5]). It runs CDH 5.5.2 with HBase, HDFS, and ZooKeeper.
> I downloaded, distributed, and enabled the Phoenix parcel
> 4.5.2-1.clabs_phoenix1.2.0.p0.774.
>
> On the HBase-Web-UI:
> * Region Servers: my five nodes, two have Num. Regions = 1
> (hbase:namespace and hbase:meta), the others have 0
>
> Now I start Phoenix:
> * cd /etc/hbase/conf
> * /usr/bin/phoenix-sqlline.py localhost:2181:/hbase
>
> On the HBase-Web-UI:
> * SYSTEM.CATALOG is created
> * SYSTEM.SEQUENCE is created and has 12 regions
> * Browser reload (F5): Now, SYSTEM.SEQUENCE has 200 regions
> * When SYSTEM.SEQUENCE has 224 regions, all Region Servers are dead.
> * Regions in transition. Red entry:
> SYSTEM.SEQUENCE,,1455886910318.84b30d86ec2535f7ff514b9eca1a62ff.
> state=FAILED_OPEN, ts=Fri Feb 19 14:01:52 CET 2016 (34s ago),
> server=aaa,60020,1455886812045
>
> I also tried using the Phoenix tarball instead of the Cloudera parcel.
> The behavior was exactly the same.
>
> Thanks for any help!
>
> Johannes
>
