On Fri, Jun 27, 2014 at 12:22 AM, Guillermo Ortiz <[email protected]>
wrote:

> If I have to.... how many reducers should I have?



Depends.  Best if you can have zero.  Otherwise, try default partitioning
and go from there?



> As many as the number of regions? I have read about HRegionPartitioner,
> but it has some limitations, and you have to be sure that no region is
> going to split while you're putting new data into your table.



It just looks at the region boundaries to calculate partitions:
http://hbase.apache.org/xref/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.html#73
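For illustration, the boundary lookup can be sketched roughly like this. This is a simplified, self-contained sketch, not the actual HBase source; the class name and structure are made up, but the core idea matches: a row belongs to the last region whose start key sorts at or below the row key, and if there are more regions than reducers the index is folded down with a modulo.

```java
// Simplified sketch of region-boundary partitioning (illustrative only,
// not the real org.apache.hadoop.hbase.mapreduce.HRegionPartitioner).
public class RegionBoundaryPartitionerSketch {

    // Sorted region start keys; the first region's start key is empty.
    private final byte[][] startKeys;

    public RegionBoundaryPartitionerSketch(byte[][] sortedStartKeys) {
        this.startKeys = sortedStartKeys;
    }

    // Unsigned lexicographic comparison, like HBase's Bytes.compareTo.
    private static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public int getPartition(byte[] rowKey, int numPartitions) {
        // Find the last region whose start key is <= the row key.
        int region = 0;
        for (int i = 0; i < startKeys.length; i++) {
            if (compare(startKeys[i], rowKey) <= 0) region = i;
        }
        // More regions than reducers: fold the region index down so
        // every row still lands on some reducer.
        return region % numPartitions;
    }

    public static void main(String[] args) {
        byte[][] starts = { new byte[0], "g".getBytes(), "t".getBytes() };
        RegionBoundaryPartitionerSketch p =
            new RegionBoundaryPartitionerSketch(starts);
        System.out.println(p.getPartition("apple".getBytes(), 3)); // 0
        System.out.println(p.getPartition("melon".getBytes(), 3)); // 1
        System.out.println(p.getPartition("zebra".getBytes(), 3)); // 2
        System.out.println(p.getPartition("zebra".getBytes(), 2)); // 2 % 2 = 0
    }
}
```

This also shows why a mid-job split is tolerated rather than fatal: the partition is computed from a snapshot of the start keys taken at job setup, so rows keep mapping to a stable partition even if the underlying region later splits.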




> Is it only for performance?
> What could happen if you put too much data in your table and a region
> splits while you're using HRegionPartitioner?
>
>
It'll keep on writing over the split.
St.Ack




>
> 2014-06-26 21:43 GMT+02:00 Stack <[email protected]>:
>
> > Be sure to read http://hbase.apache.org/book.html#d3314e5975 Guillermo
> if
> > you have not already.  Avoid reduce phase if you can.
> >
> > St.Ack
> >
> >
> > On Thu, Jun 26, 2014 at 8:24 AM, Guillermo Ortiz <[email protected]>
> > wrote:
> >
> > > I have a question.
> > > I want to execute a MapReduce job whose reduce output is going to be
> > > stored in HBase.
> > >
> > > So, it's a MapReduce job whose output is going to be stored in HBase.
> > > For a map-only job I can use
> > > HFileOutputFormat.configureIncrementalLoad(pJob, table); but I don't
> > > know how I could do it if I have a Reduce as well, since
> > > configureIncrementalLoad generates a reduce itself.
> > >
> >
>
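The pattern under discussion can be sketched as a job-configuration fragment. This is a hedged sketch under the API shown in the thread (`HFileOutputFormat.configureIncrementalLoad(job, table)`); `MyPutEmittingMapper` and the surrounding setup are hypothetical names, and the exact signatures vary between HBase versions.

```java
// Sketch of the map-only bulk-load setup the thread refers to.
// The mapper emits (ImmutableBytesWritable, Put) pairs;
// configureIncrementalLoad then installs HBase's own sorting reducer
// and a total-order partitioner matching the table's region boundaries.
// That is why plugging your own reducer into the same job does not fit:
// the reduce slot is already taken. A common workaround is to run your
// own reduce logic in a first job and bulk-load its output with a
// second, map-only job like this one.
Job job = Job.getInstance(conf, "bulk-load-sketch");
job.setMapperClass(MyPutEmittingMapper.class);          // hypothetical mapper
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(Put.class);
HFileOutputFormat.configureIncrementalLoad(job, table); // sets reducer + partitioner
```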
