On Wed, Sep 12, 2012 at 4:48 PM, lars hofhansl <[email protected]> wrote:
> No. By default each call to ClientScanner.next(...) incurs an RPC call to
> the HBase server, which is why it is important to enable scanner caching
> (as opposed to batching) if you expect to scan many rows.
> By default scanner caching is set to 1.

Thanks! If caching is set > 1, then is there a way to limit the number of
rows that are fetched from the server?

> ________________________________
> From: Mohit Anchlia <[email protected]>
> To: [email protected]
> Sent: Wednesday, September 12, 2012 4:29 PM
> Subject: Re: No of rows
>
> But when the ResultScanner executes, wouldn't it already query the servers
> for all the rows matching the start key? I am trying to avoid reading all
> the blocks from the file system that match the keys.
>
> On Wed, Sep 12, 2012 at 3:59 PM, Doug Meil <[email protected]> wrote:
>
> >
> > Hi there,
> >
> > If you're talking about stopping a scan after X rows (as opposed to
> > batching), just break out of the ResultScanner loop after X rows.
> >
> > http://hbase.apache.org/book.html#data_model_operations
> >
> > You can either add a ColumnFamily to a scan, or add specific attributes
> > (i.e., "cf:column") to a scan.
> >
> > On 9/12/12 6:50 PM, "Mohit Anchlia" <[email protected]> wrote:
> >
> > > I am using client 0.90.5 jar.
> > >
> > > Is there a way to limit how many rows can be fetched in one scan call?
> > >
> > > Similarly, is there something for columns?
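The two points made in the thread (with scanner caching set to c, reading n rows costs roughly ceil(n / c) RPCs instead of n, and a scan can be capped at X rows simply by breaking out of the result loop) can be sketched as below. This is a minimal stand-in, not the real HBase client: FakeScanner and ScanSketch are hypothetical names that mimic the behavior of ResultScanner; in the real client you would call Scan.setCaching(...) on the Scan passed to HTable.getScanner(...).

```java
import java.util.ArrayList;
import java.util.List;

public class ScanSketch {
    // Hypothetical stand-in for ResultScanner: it fetches rows from a
    // "server" in caching-sized chunks and counts the round trips.
    static class FakeScanner {
        private final int totalRows;           // rows available on the "server"
        private final int caching;             // rows fetched per round trip
        private int served = 0;                // rows handed to the caller
        private final List<String> buffer = new ArrayList<>();
        int rpcCount = 0;                      // round trips to the "server"

        FakeScanner(int totalRows, int caching) {
            this.totalRows = totalRows;
            this.caching = caching;
        }

        // Mirrors ClientScanner.next(): returns null when the scan is done.
        String next() {
            if (buffer.isEmpty()) {
                if (served >= totalRows) return null;
                rpcCount++;  // one RPC fetches up to 'caching' rows at once
                for (int i = 0; i < caching
                        && served + buffer.size() < totalRows; i++) {
                    buffer.add("row-" + (served + buffer.size()));
                }
            }
            served++;
            return buffer.remove(0);
        }
    }

    public static void main(String[] args) {
        // 100 rows on the server, scanner caching = 10.
        FakeScanner scanner = new FakeScanner(100, 10);

        // Doug's suggestion: stop after X rows by breaking out of the loop.
        int limit = 25, read = 0;
        for (String row; (row = scanner.next()) != null; ) {
            if (++read >= limit) break;
        }

        // 25 rows at caching=10 cost ceil(25/10) = 3 round trips; with the
        // default caching of 1 the same loop would have cost 25 RPCs.
        System.out.println("rows read: " + read);
        System.out.println("rpcs: " + scanner.rpcCount);
    }
}
```

The same break-out-of-the-loop pattern works against the real ResultScanner; caching only changes how many rows each underlying RPC prefetches, not how many rows the client loop consumes.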
