Hi,

We are using HDP 2.3.4 and Phoenix 4.4. Our global index table is splitting excessively: the cluster-wide region size is set to 8 GB, yet the global index table already has 18 regions and the largest region is only 10.9 MB. This does not look like healthy behavior. I went through the tuning guide (https://phoenix.apache.org/tuning.html) and could not find anything relevant. Is this region splitting done intentionally by Phoenix for secondary index tables?
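For reference, I assume one way to check whether the splitting is driven by a table-level setting rather than the cluster default is to look for a MAX_FILESIZE attribute on the index table, since a per-table MAX_FILESIZE overrides the cluster-wide hbase.hregion.max.filesize. Something like the following from the HBase shell (SEC_INDEX is the index table from the du output below):

  hbase shell
  hbase(main):001:0> describe 'SEC_INDEX'
  # If MAX_FILESIZE appears among the table attributes, that value (in bytes)
  # is the split threshold for this table instead of the 8 GB cluster setting.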
Here is the output of the du command:

[ag@hdpclient1 ~]$ hadoop fs -du -h /apps/hbase/data/data/default/SEC_INDEX
761     /apps/hbase/data/data/default/SEC_INDEX/.tabledesc
0       /apps/hbase/data/data/default/SEC_INDEX/.tmp
9.3 M   /apps/hbase/data/data/default/SEC_INDEX/079db2c953c30a8270ecbd52582e81ff
2.9 M   /apps/hbase/data/data/default/SEC_INDEX/0952c070234c05888bfc2a01645e9e88
10.9 M  /apps/hbase/data/data/default/SEC_INDEX/0d69bbb8991b868f0437b624410e9bed
8.2 M   /apps/hbase/data/data/default/SEC_INDEX/206562491fd1de9db48cf422dd8c2059
7.9 M   /apps/hbase/data/data/default/SEC_INDEX/25318837ab8e1db6922f5081c840d2e7
9.5 M   /apps/hbase/data/data/default/SEC_INDEX/5369e0d6526b3d2cdab9937cb320ccb3
9.6 M   /apps/hbase/data/data/default/SEC_INDEX/62704ee3c9418f0cd48210a747e1f8ac
7.8 M   /apps/hbase/data/data/default/SEC_INDEX/631376fc5515d7785b2bcfc8a1f64223
2.8 M   /apps/hbase/data/data/default/SEC_INDEX/6648d5396ba7a3c3bf884e5e1300eb0e
9.4 M   /apps/hbase/data/data/default/SEC_INDEX/6e6e133580aea9a19a6b3ea643735072
8.1 M   /apps/hbase/data/data/default/SEC_INDEX/8535a5c8a0989dcdfad2b1e9e9f3e18c
7.8 M   /apps/hbase/data/data/default/SEC_INDEX/8ffa32e0c6357c2a0b413f3896208439
9.3 M   /apps/hbase/data/data/default/SEC_INDEX/c27e2809cd352e3b06c0f11d3e7278c6
8.0 M   /apps/hbase/data/data/default/SEC_INDEX/c4f5a98ce6452a6b5d052964cc70595a
8.1 M   /apps/hbase/data/data/default/SEC_INDEX/c578d3190363c32032b4d92c8d307215
7.9 M   /apps/hbase/data/data/default/SEC_INDEX/d750860bac8aa372eb28aaf055ea63e7
9.6 M   /apps/hbase/data/data/default/SEC_INDEX/e9756aa4c7c8b9bfcd0857b43ad5bfbe
8.0 M   /apps/hbase/data/data/default/SEC_INDEX/ebaae6c152e82c9b74c473babaf644dd

--
Thanks & Regards,
Anil Gupta
