The HBase reference guide [1] has some suggestions with respect to sizing
of hardware and tuning for performance. There is no industry standard,
because the best way to configure HBase depends on how you are using it.

As for the ZK question, Google can point you to resources that describe
what happens when your quorum goes from an odd number of servers to an
even number; the short version is that a quorum requires a strict
majority, so an even-sized ensemble tolerates no more failures than the
next smaller odd size.
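
To make that concrete, here is the quorum arithmetic as a minimal Java
sketch (the QuorumMath class name and the 3..6 range are just for
illustration):

public class QuorumMath {
    public static void main(String[] args) {
        for (int n = 3; n <= 6; n++) {
            // A ZooKeeper ensemble needs a strict majority of votes
            // to form a quorum, so it survives only as many failures
            // as still leave a majority standing.
            int quorum = n / 2 + 1;       // smallest strict majority of n
            int tolerated = n - quorum;   // failures the ensemble survives
            System.out.printf("%d servers: quorum = %d, tolerates %d failure(s)%n",
                    n, quorum, tolerated);
        }
    }
}

Running it shows that 3 and 4 servers each tolerate one failure, and 5
and 6 each tolerate two, which is why ensembles are kept at odd sizes.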

1. https://hbase.apache.org/book.html

Cheers,

On Friday, November 4, 2016, Manjeet Singh <manjeet.chand...@gmail.com>
wrote:

> Hi All,
>
> I have the questions below; I'd appreciate it if anyone could answer them.
>
> Q1. How many hardware cores does HBase require? What is the calculation? At
> the end of the day, one needs to be sure how many cores are required.
>
>
>
> Q2. What is the RAM distribution calculation for each RS, the Master, the
> Java heap, and the client? Please consider my requirement: we insert 12 GB
> of data per day, I have applied Snappy and FAST_DIFF, and I perform random
> get/put operations in bulk using a Spark job (15-minute window, max data
> size 4 GB, min 300 MB).
>
> Right now, all RSs have 12 GB of RAM,
> the Master has 6 GB of RAM,
> 45 GB of RAM for Spark,
> 5% of RAM free for Cloudera,
> Java heap 4 GB,
> client 4 GB.
>
>
> Q3. Is ZooKeeper 100% required for HBase HA? (The question arises: what
> will happen if any ZooKeeper node goes down, since the ensemble should be
> an odd number?)
>
>
> Q4. Is there any industry-standard configuration in one document that I can
> use for H/W sizing and RAM allocation?
>
> Thanks
> Manjeet
>
> --
> luv all
>


-- 
-Dima
