Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-10 Thread William Shen
Thank you for taking a look. This specific issue with fast diff encoding is
the first time we've seen it, though perhaps it has happened in the past as
well (we did encounter a couple of these IndexOutOfBounds or BufferOverflow
issues before, but back then we dropped the data and moved on). We have also
had issues from time to time with data becoming corrupted and no longer
being in a format Phoenix expects, but in those cases we could always still
read at the HBase level, unlike this time.

- Will
On Mon, Dec 10, 2018 at 9:19 AM Stack wrote:

> Thank you William Shen. Looks like a corruption (either in the writing or
> subsequent). Does it happen frequently?

Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-10 Thread Stack
Thank you William Shen. Looks like a corruption (either in the writing or
subsequent). Does it happen frequently?
Thanks,
S

On Thu, Dec 6, 2018 at 12:24 PM William Shen wrote:

> I have created https://issues.apache.org/jira/browse/HBASE-21563 in case
> anyone else is able to give a try at reading the HFile. Thank you!

Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-06 Thread William Shen
I have created https://issues.apache.org/jira/browse/HBASE-21563 in case
anyone else is able to give a try at reading the HFile. Thank you!


Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-05 Thread William Shen
In addition, when running hbase hfile -f -p, kv pairs were printed until
the program hit the following exception:

Exception in thread "main" java.lang.RuntimeException: Unknown code 65
at org.apache.hadoop.hbase.KeyValue$Type.codeToType(KeyValue.java:259)
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:1246)
at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$ClonedSeekerState.toString(BufferedDataBlockEncoder.java:506)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:382)
at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:316)
at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:255)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:677)


Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-05 Thread William Shen
Thank you Stack.
I was able to isolate the specific HFile causing the exception. Do you mind
teaching me how to play with the file standalone? I am not sure if I know
how to do that.
Thanks!
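One way to do the standalone inspection Stack suggests is with the bundled HFile tool (HFilePrettyPrinter, i.e. `hbase hfile`). A minimal sketch; the HDFS path components in angle brackets are placeholders, not values taken from this thread:

```shell
# Copy the suspect HFile out of HDFS so it can be inspected offline.
hdfs dfs -get /hbase/data/default/qa2.ADGROUPS/<region>/<cf>/<hfile> /tmp/suspect.hfile

# Print key/values (-p) and file metadata (-m) from the local copy.
hbase hfile -p -m -f file:///tmp/suspect.hfile
```

If the local copy reproduces the failure outside the cluster, it can be attached to a JIRA for others to debug.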



Re: HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-05 Thread Stack
Looks like a bug in FastDiffDeltaEncoder triggered by the current form of
the target file. Can you figure out which file it is (going by the Get
coordinates)? I suppose the compactor is running into the same problem
(I was thinking a major compaction might get you over this hump). You could
make a copy of the problematic file and play with it standalone to see if
you can figure out the bug. Failing that, post it to a JIRA if you can't
figure it out yourself, so someone else might have a go at it?

Thanks,
S



HBase Get Encounters java.lang.IndexOutOfBoundsException

2018-12-05 Thread William Shen
Hi there,

We've recently encountered an issue retrieving data from our HBase cluster,
and have not had much luck troubleshooting it. We narrowed the problem down
to a single GET, which appears to be caused by FastDiffDeltaEncoder.java
running into java.lang.IndexOutOfBoundsException. Has anyone encountered
similar issues before, or does anyone have experience troubleshooting issues
such as this one? Any help would be much appreciated! We are running
1.2.0-cdh5.9.2, and the GET in question is:

hbase(main):004:0> get 'qa2.ADGROUPS',
"\x05\x80\x00\x00\x00\x00\x1F\x54\x9C\x80\x00\x00\x00\x00\x1C\x7D\x45\x00\x04\x80\x00\x00\x00\x00\x1D\x0F\x19\x80\x00\x00\x00\x00\x4A\x64\x6F\x80\x00\x00\x00\x01\xD9\xDB\xCE"

COLUMN                                          CELL




ERROR: java.io.IOException
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
Caused by: java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkBounds(Buffer.java:567)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:149)
at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:465)
at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:516)
at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:618)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1277)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:180)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:588)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5706)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5865)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5643)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5620)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5606)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6801)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6779)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2029)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33644)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
... 3 more


Thank you very much in advance!


Re: java.lang.IndexOutOfBoundsException

2011-04-21 Thread Venkatesh

Thanks..we have the same exact code that processes 700 million puts per day
in 0.20.6, from a tomcat servlet (each thread creates a new HTable, does 1
put & closes).

..in 0.90.2..we changed just the API whose signature changed (mainly
HTable)..it crawls..each/most requests taking well over 2 sec..we can't
keep up with even 1/10th of production load.

Everything in the cluster is identical..same 20-node cluster..

That is impressive performance from asynchbase..Thanks for the tip..I'll
give it a try (assuming it would work with 0.90.2)


-----Original Message-----
From: tsuna tsuna...@gmail.com
To: user@hbase.apache.org
Sent: Wed, Apr 20, 2011 4:30 pm
Subject: Re: java.lang.IndexOutOfBoundsException

On Wed, Apr 20, 2011 at 10:04 AM, Venkatesh vramanatha...@aol.com wrote:
> On 0.90.2, do you all think using HTablePool would help with performance
> problem?

What performance problems are you seeing?

BTW, if you want a thread-safe client that's highly scalable for
high-throughput, multi-threaded applications, look at asynchbase:
http://github.com/stumbleupon/asynchbase
OpenTSDB uses it and I'm able to push 20 edits per second to 3
RegionServers.

--
Benoit "tsuna" Sigoure
Software Engineer @ www.StumbleUpon.com


Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Ted Yu
I have seen this before.
HTable isn't thread-safe.

Please describe your usage.

Thanks

On Wed, Apr 20, 2011 at 6:03 AM, Venkatesh vramanatha...@aol.com wrote:


  Using hbase-0.90.2..(sigh..) Any tip? thanks


  java.lang.IndexOutOfBoundsException: Index: 4, Size: 3
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.remove(ArrayList.java:387)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1257)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:822)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:678)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:663)






Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Jean-Daniel Cryans
Are you sharing a single HTable between multiple threads that do puts?

J-D







Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Ted Yu
I think HConnectionManager can catch IndexOutOfBoundsException and translate
into a more user-friendly message, informing user about thread-safety.








Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Venkatesh

Yeah, you & J-D both hit it..
I knew it's bad..I was trying anything & everything to solve the incredibly
long latency with hbase puts on 0.90.2..
I get ok/better response with batch put.. this was a quick & dirty way to
accumulate puts by sharing the same HTable instance.
Thanks for letting me know..this exception is due to sharing of HTable..

I've to go back to 0.20.6 since our system is down too long..(starting with
an empty table)

On 0.90.2, do you all think using HTablePool would help with performance
problem?
thx
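For the HTablePool question, a minimal sketch of the thread-safe pattern against the 0.90-era client API (HTablePool was deprecated in later releases). The class names are real for that era, but the table name, column family, qualifier, and pool size here are illustrative assumptions, not values from this thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PooledWriter {
  private static final Configuration CONF = HBaseConfiguration.create();
  // One shared pool; each thread checks a table out, uses it, returns it.
  // (Leaving maxSize at its default, as Ted suggests elsewhere in the
  // thread, is also an option.)
  private static final HTablePool POOL = new HTablePool(CONF, 100);

  public static void write(byte[] row, byte[] value) throws Exception {
    HTableInterface table = POOL.getTable("mytable"); // hypothetical table
    try {
      Put put = new Put(row);
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), value);
      table.put(put); // never share one HTable instance across threads
    } finally {
      POOL.putTable(table); // return to the pool instead of closing
    }
  }
}
```

The key point from Ted and J-D stands either way: HTable is not thread-safe, so each thread needs its own instance, whether created directly or borrowed from a pool.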

 


 

 



Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Ted Yu
When using HTablePool, try not to define maxSize yourself - use the default.





Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Venkatesh
If I use the default, I can't share/pass my HBaseConfiguration object.. at least I
don't see a constructor/setter..
that would go against the previous suggestion


Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Ted Yu
I meant specifying Integer.MAX_VALUE as maxSize along with config.
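For reference, a rough stdlib-only sketch of what a pool taking a config plus a maxSize cap does. Config, Table, and TablePool here are stand-ins for illustration, not the real HBase classes:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PoolSketch {
    static class Config {}                       // stand-in for HBaseConfiguration
    static class Table {                         // stand-in for HTable
        final String name;
        Table(String name) { this.name = name; }
    }
    // Stand-in for an HTablePool-style class: built from a config plus a cap
    // on how many idle instances may be kept per table name.
    static class TablePool {
        private final Config conf;
        private final int maxSize;
        private final Map<String, Queue<Table>> pools = new ConcurrentHashMap<>();
        TablePool(Config conf, int maxSize) { this.conf = conf; this.maxSize = maxSize; }
        Table getTable(String name) {
            Table t = pools.computeIfAbsent(name, k -> new ConcurrentLinkedQueue<>()).poll();
            return (t != null) ? t : new Table(name);  // reuse if pooled, else create
        }
        void putTable(Table t) {                       // return an instance when done
            Queue<Table> q = pools.computeIfAbsent(t.name, k -> new ConcurrentLinkedQueue<>());
            if (q.size() < maxSize) q.offer(t);        // never pool more than maxSize
        }
    }

    public static void main(String[] args) {
        TablePool pool = new TablePool(new Config(), Integer.MAX_VALUE);
        Table t = pool.getTable("mytable");
        pool.putTable(t);
        System.out.println(pool.getTable("mytable") == t); // true: instance was reused
    }
}
```

With maxSize = Integer.MAX_VALUE the cap is effectively unbounded, which is what the suggestion above amounts to while still letting you pass in your own configuration.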






Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread Venkatesh

 

sorry.. yeah.. that's dumb of me.. clearly I'm not thinking.. just frustrated
with the upgrade.
thx


 

 

 


Re: java.lang.IndexOutOfBoundsException

2011-04-20 Thread tsuna
On Wed, Apr 20, 2011 at 10:04 AM, Venkatesh vramanatha...@aol.com wrote:
 On 0.90.2, do you all think using HTablePool would help with performance 
 problem?

What performance problems are you seeing?

BTW, if you want a thread-safe client that's highly scalable for
high-throughput, multi-threaded applications, look at asynchbase:
http://github.com/stumbleupon/asynchbase
OpenTSDB uses it and I'm able to push 20 edits per second to 3
RegionServers.
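The non-blocking style asynchbase offers can be sketched with plain java.util.concurrent; AsyncClient below is a hypothetical stand-in (asynchbase's real client returns a Deferred, not a CompletableFuture), meant only to show why one thread can keep many puts in flight instead of blocking on each one:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncPutSketch {
    // Hypothetical async client: put() returns immediately with a future and
    // the actual write completes on a background I/O thread.
    static class AsyncClient {
        private final ExecutorService io = Executors.newFixedThreadPool(2);
        private final AtomicInteger persisted = new AtomicInteger();
        CompletableFuture<Void> put(String row) {
            return CompletableFuture.runAsync(persisted::incrementAndGet, io);
        }
        int shutdown() { io.shutdown(); return persisted.get(); }
    }

    public static void main(String[] args) {
        AsyncClient client = new AsyncClient();
        List<CompletableFuture<Void>> inFlight = new ArrayList<>();
        // The caller never blocks per put; 1000 writes are pipelined at once.
        for (int i = 0; i < 1000; i++) inFlight.add(client.put("row-" + i));
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        System.out.println(client.shutdown()); // 1000
    }
}
```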

-- 
Benoit "tsuna" Sigoure
Software Engineer @ www.StumbleUpon.com