> Earlier client (0.94) didn't complain about it.

Did you observe any data loss (w.r.t. Increments) in 0.94 when the servers
were loaded?

As Anoop said, it is not recommended to turn off this feature in 0.98.
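If the batches really can contain two Increments for the same row, one way to avoid the nonce conflicts without disabling nonces is to collapse the deltas client-side before calling batch(). A minimal sketch against the 0.98 client API; the class, method, parameter names, and column coordinates below are made up for illustration, not anything from your job:

import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public class CollapsedIncrements {
  /**
   * Merge raw (rowKey, delta) pairs so each row appears at most once in the batch,
   * then issue a single Increment per row. family/qualifier are placeholders.
   */
  public static void incrementOncePerRow(HTableInterface table, List<String> rowKeys,
      List<Long> deltas, byte[] family, byte[] qualifier)
      throws IOException, InterruptedException {
    // Sum deltas per row key; duplicates collapse into one entry here.
    Map<String, Long> merged = new LinkedHashMap<String, Long>();
    for (int i = 0; i < rowKeys.size(); i++) {
      Long prev = merged.get(rowKeys.get(i));
      merged.put(rowKeys.get(i), (prev == null ? 0L : prev) + deltas.get(i));
    }
    // One Increment per distinct row, so the server never sees the same key twice in one multi call.
    List<Increment> increments = new ArrayList<Increment>(merged.size());
    for (Map.Entry<String, Long> e : merged.entrySet()) {
      Increment inc = new Increment(Bytes.toBytes(e.getKey()));
      inc.addColumn(family, qualifier, e.getValue());
      increments.add(inc);
    }
    Object[] results = new Object[increments.size()];
    table.batch(increments, results);
  }
}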

On Wed, Sep 17, 2014 at 12:34 AM, Vin Gup <[email protected]>
wrote:

> Yes possibly. Why would that be a problem?
> Earlier client (0.94) didn't complain about it.
>
> Thanks,
> -Vinay
>
> > On Sep 17, 2014, at 12:16 AM, Anoop John <[email protected]> wrote:
> >
> > You have more than one increment for the same key in one batch?
> >
> > On Wed, Sep 17, 2014 at 12:33 PM, Vinay Gupta <[email protected]>
> > wrote:
> >
> >> Also the regionserver keeps throwing exceptions like
> >>
> >> 2014-09-17 06:56:07,151 DEBUG [RpcServer.handler=10,port=60020]
> >> regionserver.ServerNonceManager: Conflict detected by nonce:
> >> [4387127846806256989:2793719453824938427], [state 0, hasWait false, activity 06:55:41.091]
> >> 2014-09-17 06:56:07,151 DEBUG [RpcServer.handler=10,port=60020]
> >> regionserver.ServerNonceManager: Conflict detected by nonce:
> >> [4387127846806256989:843474753753473839], [state 0, hasWait false, activity 06:55:41.094]
> >>
> >>
> >> Are we sending data too fast? Is there a client-side setting or a server-side
> >> setting we need to look at to alleviate this?
> >> Again, this was never a problem with the HBase 0.94 cluster.
> >>
> >> We are calling the batch API with a List<> of 1000 Increments, and we do
> >> approximately 30000 Increments (30 batches) at a time.
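For reference, a sketch of that chunked batch pattern against the 0.98 HTableInterface API; the class and method names are made up and this is not the actual job code:

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Increment;

public class IncrementBatcher {
  /** Send the increments in chunks of batchSize (e.g. 1000) via batch(). */
  public static void sendInChunks(HTableInterface table, List<Increment> all, int batchSize)
      throws IOException, InterruptedException {
    for (int start = 0; start < all.size(); start += batchSize) {
      List<Increment> chunk = all.subList(start, Math.min(start + batchSize, all.size()));
      Object[] results = new Object[chunk.size()];
      table.batch(chunk, results);
    }
  }
}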
> >>
> >>
> >> -Vinay
> >>
> >>
> >> On Sep 16, 2014, at 11:24 PM, Vinay Gupta <[email protected]> wrote:
> >>
> >>>
> >>>>
> >>>> Hi,
> >>>> We are using the HBase batch API, and with 0.98.1 we get the following
> >>>> exception when using batch() with Increment:
> >>>> ————————————
> >>>> org.apache.hadoop.hbase.exceptions.OperationConflictException: The
> >>>> operation with nonce {5266048044724982303, 5395957753774586342} on row
> >>>> [rowkey13-20140331] may have already completed
> >>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.startNonceOperation(HRegionServer.java:4199)
> >>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4163)
> >>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3424)
> >>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3359)
> >>>>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
> >>>>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
> >>>>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> >>>>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> >>>>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> >>>>     at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> >>>>     at java.lang.Thread.run(Thread.java:745)
> >>>> ————————————
> >>>> Eventually the job fails with
> >>>> "Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException"
> >>>>
> >>>> The same job works on an HBase 0.94 installation. Any tips on which
> >>>> config settings to play with to resolve this?
> >>>> Is the application supposed to handle these exceptions? (Something new
> >>>> in HBase 0.98 or 0.96?)
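If the application does end up handling these failures itself, RetriesExhaustedWithDetailsException exposes the per-row causes, so nonce conflicts can at least be told apart from other errors. A rough sketch under that assumption; the class and method names are made up for illustration:

import java.util.List;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.exceptions.OperationConflictException;

public class IncrementErrorInspector {
  /** Run batch() and report which rows failed with a nonce conflict vs. something else. */
  public static void batchAndInspect(HTableInterface table, List<Increment> increments)
      throws Exception {
    Object[] results = new Object[increments.size()];
    try {
      table.batch(increments, results);
    } catch (RetriesExhaustedWithDetailsException e) {
      for (int i = 0; i < e.getNumExceptions(); i++) {
        Throwable cause = e.getCause(i);
        if (cause instanceof OperationConflictException) {
          // The server saw the same nonce again; the increment may already have been applied.
          System.err.println("Nonce conflict on row " + e.getRow(i));
        } else {
          System.err.println("Failure on row " + e.getRow(i) + ": " + cause);
        }
      }
      throw e;
    }
  }
}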
> >>>>
> >>>> Thanks,
> >>>> -Vinay
> >>
> >>
>
