Most of the time is spent reading from the Store files, not on network transfer
of the Increment objects.

Sent from my iPhone

On Jan 12, 2013, at 17:40, Anoop John <[email protected]> wrote:

Hi
    Can you try using the API HTable#batch()?  With it you can batch a
number of increments for many rows into just one RPC call. It might help you
reduce the net time taken.  Good luck.

-Anoop-
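
A minimal sketch of that batch approach, assuming an HBase 0.9x-style client;
the table name "counters" and the column "f:count" are placeholders, not
details from this thread:

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchIncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "counters");   // placeholder table name
    byte[] family = Bytes.toBytes("f");            // placeholder family
    byte[] qualifier = Bytes.toBytes("count");     // placeholder qualifier

    // Build one Increment per row and submit them together.
    // batch() groups the operations by region server, so many rows are
    // updated with far fewer client round trips than one increment() per row.
    List<Row> actions = new ArrayList<Row>();
    for (int i = 0; i < 100000; i++) {
      Increment inc = new Increment(Bytes.toBytes("row-" + i));
      inc.addColumn(family, qualifier, 1L);
      actions.add(inc);
    }
    Object[] results = new Object[actions.size()];
    table.batch(actions, results);   // results[i] holds the outcome of actions.get(i)
    table.close();
  }
}

In practice the million rows would likely be split into several such batches
(possibly across a few client threads) rather than one giant list, to keep
client-side memory bounded.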

On Sat, Jan 12, 2013 at 4:07 PM, kiran <[email protected]> wrote:

Hi,


My use case is that I need to increment 1 million rows within 15 minutes. I
tried two approaches, but neither of them yielded results.

I have used HTable.increment, but it does not complete in the specified time.
I also tried multi-threading, but it is very costly. I have also implemented
get and put as an alternative, but that approach does not complete in 15
minutes either.

Can I use any low-level implementation, such as Store or HRegionServer, to
increment 1 million rows? I know the table splits, the region servers serving
them, and which rows fall into which splits. I suspect the major concern is
network I/O rather than processing with the above two approaches.
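
For reference, a minimal sketch of the per-row HTable.increment approach
described above; the table name "counters" and the column "f:count" are
illustrative placeholders only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public class PerRowIncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "counters");   // placeholder table name
    byte[] family = Bytes.toBytes("f");            // placeholder family
    byte[] qualifier = Bytes.toBytes("count");     // placeholder qualifier

    // One Increment object and one synchronous client call per row:
    // for a million rows this is a million sequential round trips.
    for (int i = 0; i < 1000000; i++) {
      Increment inc = new Increment(Bytes.toBytes("row-" + i));
      inc.addColumn(family, qualifier, 1L);
      table.increment(inc);
    }
    table.close();
  }
}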


--

Thank you

Kiran Sarvabhotla


-----Even a correct decision is wrong when it is taken late
