It's odd that HBase's aggregate functions don't use MapReduce; I would expect their performance to be poor on large tables.
Is it a must to use coprocessors?
Is there an easier way to improve these functions' performance?
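For reference, the aggregation endpoint that ships with HBase (org.apache.hadoop.hbase.coprocessor.AggregateImplementation) runs region-side rather than through MapReduce. A minimal sketch of loading it cluster-wide via hbase-site.xml (the per-table HTableDescriptor route also works):

```xml
<!-- hbase-site.xml: load the shipped aggregation endpoint on every region -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
```

Once loaded, the client-side AggregationClient (e.g. its rowCount(tableName, columnInterpreter, scan) method) fans the request out to each region in parallel, which for simple aggregates is typically cheaper than launching a full M/R job.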

> CC: [email protected]
> From: [email protected]
> Subject: Re: Hbase MapReduce
> Date: Sat, 24 Nov 2012 12:05:45 -0600
> To: [email protected]
> 
> Do you think it would be a good idea to temper the use of CoProcessors?
> 
> This kind of reminds me of when people first started using stored 
> procedures...
> 
> 
> Sent from a remote device. Please excuse any typos...
> 
> Mike Segel
> 
> On Nov 24, 2012, at 11:46 AM, tom <[email protected]> wrote:
> 
> > Hi, but you do not need to use M/R. You could also use coprocessors.
> > 
> > See this site:
> > https://blogs.apache.org/hbase/entry/coprocessor_introduction
> > -> in the section "Endpoints"
> > 
> > An aggregation coprocessor ships with hbase that should match your 
> > requirements.
> > You just need to load it, and then you can access it from HTable:
> > 
> > HTable.coprocessorExec(..) 
> > <http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#coprocessorExec%28java.lang.Class,%20byte[],%20byte[],%20org.apache.hadoop.hbase.client.coprocessor.Batch.Call,%20org.apache.hadoop.hbase.client.coprocessor.Batch.Callback%29>
> > 
> > Regards
> > tom
> > 
> >> On 24.11.2012 18:32, Marcos Ortiz wrote:
> >> Regards, Dalia.
> >> You have to use MapReduce for that.
> >> In the HBase in Practice book, there are lots of great examples of this.
> >> 
> >> On 11/24/2012 12:15 PM, Dalia Sobhy wrote:
> >>> Dear all,
> >>> I wanted to ask a question..
> >>> Do HBase aggregate functions such as rowCount, getMax, and getAverage use 
> >>> MapReduce to execute those functions?
> >>> Thanks :D
> > 