On Sun, Nov 21, 2010 at 9:55 PM, Jean-Daniel Cryans <[email protected]> wrote:
I'm unclear about the 2TB disk thing, is it 1x2TB or 2x1TB or 4x500GB?
I hope it's the last one, as you want to have as many spindles as possible.

We have 2x1TB.

I would prefer 24GB to 16; this is what we run on, and it
works like a charm and gives more room for those memory-hungry jobs.

OK, the question here is: is 24GB critical? Will all of this be stable
using 16GB?
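For what it's worth, on a 16GB node the way the heap is split matters as much as the total. A hypothetical sketch of the relevant settings (the variable names are the standard ones, but the sizes below are illustrative assumptions, not a tested recommendation):

```shell
# hbase-env.sh -- region server heap on a hypothetical 16GB slave node
# (sizes are illustrative assumptions, not a tested recommendation)
export HBASE_HEAPSIZE=4000        # HBase daemon heap, in MB

# hadoop-env.sh -- per-daemon heap for DataNode/TaskTracker
export HADOOP_HEAPSIZE=1000       # in MB

# The remaining RAM is shared by the map/reduce child tasks
# (mapred.child.java.opts, e.g. -Xmx1g each) and the OS page cache.
```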


 What kind of stability issues are you having?

It looks like I have hit all of the issues mentioned on the HBase list
regarding region server crashes.


We planned to run 3 map/reduce jobs simultaneously, inserting their results
into 3 HBase tables (each table insertion is ~2GB).
In addition, there are 10 scans (in the future there will be 20).

Almost all tests failed, so now I run only one job, or only one scan, at a
time. In addition, I reduced from 3 parallel reducers to 2. That brings
some stability, but with that kind of workaround it is impossible to go
to production.
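One common knob for this on a Hadoop 0.20-era cluster is capping the task slots per node in mapred-site.xml, so that three jobs running in parallel cannot oversubscribe a machine's RAM. The property names are the standard ones for that Hadoop generation; the values are illustrative assumptions for a 16GB node, not a recommendation:

```xml
<!-- mapred-site.xml: per-TaskTracker slot limits (illustrative values) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```

These are cluster-side settings read by each TaskTracker at startup, so they apply to the sum of all concurrent jobs rather than to any single one.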


Thanks, Oleg



> J-D
>
> On Sun, Nov 21, 2010 at 5:53 AM, Oleg Ruchovets <[email protected]>
> wrote:
> > Hi all,
> > After testing HBase for a few months with a very light configuration (5
> > machines, 2TB disk, 8GB RAM), we are now planning for production.
> > Our Load -
> > 1) 50GB log files to process per day by Map/Reduce jobs.
> > 2) Insert 4-5GB into 3 tables in HBase.
> > 3) Run 10-20 scans per day (scanning about 20 regions in a table).
> > All this should run in parallel.
> > Our current configuration can't cope with this load, and we are having
> > many stability issues.
> >
> > This is what we have in mind :
> > 1. Master machine - 32GB RAM, 4TB disk, two quad-core CPUs.
> > 2. Slave machines - 16GB RAM, 2TB disk, two quad-core CPUs.
> > We plan to have up to 20 slave servers (starting with 5).
> >
> > We already read
> > http://www.cloudera.com/blog/2010/03/clouderas-support-team-shares-some-basic-hardware-recommendations/
> >
> > We would appreciate your feedback on our proposed configuration.
> >
> >
> > Regards Oleg & Lior
> >
>
