Royston:
The exception came from this line:
    ResultScanner scanner = table.getScanner(scan2);
Can you help me review the logic starting with:
    // scan the region with median and find it
    Scan scan2 = new Scan(scan);
You can log the String form of scan and scan2 before the table.getScanner()
call.
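For the logging, something like the sketch below shows the idea. It is self-contained so you can run it directly: toStringBinary here is a simplified stand-in for org.apache.hadoop.hbase.util.Bytes.toStringBinary, which is what you'd actually call on scan2.getStartRow() / getStopRow() in your client code:

```java
// Sketch: render a Scan's row-range bytes in a readable form before the
// table.getScanner(scan2) call. toStringBinary below is a simplified
// stand-in for org.apache.hadoop.hbase.util.Bytes.toStringBinary().
public class ScanDebug {
    static String toStringBinary(byte[] b) {
        if (b == null) return "null";   // this is the case the NPE suggests
        StringBuilder sb = new StringBuilder();
        for (byte x : b) {
            int v = x & 0xff;
            if (v >= 32 && v < 127) {
                sb.append((char) v);                    // printable ASCII as-is
            } else {
                sb.append(String.format("\\x%02X", v)); // escape everything else
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // In real code: toStringBinary(scan2.getStartRow()), toStringBinary(scan2.getStopRow())
        System.out.println("start=" + toStringBinary(null));
        System.out.println("start=" + toStringBinary(new byte[] { 'r', 'o', 'w', 0x01 }));
    }
}
```

If the first log line prints "start=null" right before the getScanner() call, that confirms the diagnosis below.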

I think the NPE below reveals that startRow is null (the median falls in the
first region).
If that is the case, the following guard should help:
    if (startRow != null) scan2.setStartRow(startRow);
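To make the intent of that guard concrete, here is a runnable sketch (not the actual AggregationClient code): effectiveStartRow mirrors the null check, with an empty byte[] standing in for HConstants.EMPTY_START_ROW, which HBase treats as "scan from the beginning of the table":

```java
// Sketch of null-safe start-row handling. When the median lands in the
// first region, startRow is null and must not be passed to
// Scan.setStartRow(); falling back to an empty byte[] (the equivalent of
// HConstants.EMPTY_START_ROW) means "start at the first row of the table".
public class StartRowGuard {
    static final byte[] EMPTY_START_ROW = new byte[0];

    static byte[] effectiveStartRow(byte[] startRow) {
        // Substitute the empty start row instead of propagating null.
        return (startRow != null) ? startRow : EMPTY_START_ROW;
    }

    public static void main(String[] args) {
        // Median in the first region: null becomes "beginning of table".
        System.out.println(effectiveStartRow(null).length);
        // Median in a later region: the real start row passes through untouched.
        System.out.println(effectiveStartRow(new byte[] { 1, 2 }).length);
    }
}
```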

Thanks

On Mon, Jan 23, 2012 at 5:50 AM, Royston Sellman <
[email protected]> wrote:

> Hi Ted,
>
> Finally rebuilt branch/0.92 and applied your patch and rebuilt my code.
> Using AggregationClient.sum() on my test table I get the correct result.
> Just swapping to AggregationClient.median() I get the following error:
>
>  [sshexec] org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
>  [sshexec] Mon Jan 23 13:44:12 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:13 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:14 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:15 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:17 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:19 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:23 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:27 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:35 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec] Mon Jan 23 13:44:51 GMT 2012, org.apache.hadoop.hbase.client.ScannerCallable@219ba640, java.lang.NullPointerException
>  [sshexec]
>  [sshexec] Result = -1
>  [sshexec]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1345)
>  [sshexec]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1203)
>  [sshexec]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1126)
>  [sshexec]     at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:627)
>  [sshexec]     at org.apache.hadoop.hbase.client.coprocessor.AggregationClient.median(AggregationClient.java:469)
>  [sshexec]     at uk.org.cse.aggregation.EDRPAggregator.testSumWithValidRange(EDRPAggregator.java:55)
>  [sshexec]     at uk.org.cse.aggregation.EDRPAggregator.main(EDRPAggregator.java:85)
>  [sshexec]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  [sshexec]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  [sshexec]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  [sshexec]     at java.lang.reflect.Method.invoke(Method.java:597)
>  [sshexec]     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> Something wrong with Scan setup?
>
> Cheers,
> Royston
>
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> Sent: 21 January 2012 17:14
> To: [email protected]
> Subject: Re: Hbase out of memory error
>
> Benoit's patches are already in 0.92
>
> Thanks
>
>
> On Jan 21, 2012, at 9:11 AM, Royston Sellman
> <[email protected]> wrote:
>
> > So should I try applying Benoit Sigoure's patch for HBASE-5204? Will this
> > patch be in the 0.92 branch soon?
> >
> > Cheers,
> > Royston
> >
> >
> >
> > On 21 Jan 2012, at 16:58, [email protected] wrote:
> >
> >> That is the correct branch.
> >>
> >> Thanks
> >>
> >>
> >>
> >> On Jan 21, 2012, at 8:50 AM, Royston Sellman
> >> <[email protected]> wrote:
> >>
> >>> Hi Ted,
> >>>
> >>> Yes, I am compiling with the same HBase jars. I wasn't aware of
> >>> HBASE-5204; thanks, it sounds like that could be my problem. Can you
> >>> think of anything else I should check?
> >>>
> >>> Just to make sure: I am checking out the code from
> >>> svn.apache.org/repos/asf/hbase/branches/0.92. Is this the correct branch?
> >>>
> >>> Thanks,
> >>> Royston
> >>>
> >>>
> >>> On 20 Jan 2012, at 18:45, Ted Yu wrote:
> >>>
> >>>> Royston:
> >>>> I guess you have seen HBASE-5204. In particular:
> >>>>>> when a 0.92 server fails to deserialize a 0.90-style RPC, it
> >>>>>> attempts to allocate a large buffer because it doesn't read fields
> >>>>>> of 0.90-style RPCs properly.
> >>>>
> >>>> Were your client code compiled with the same version of HBase as
> >>>> what was running on your cluster ?
> >>>>
> >>>> Thanks
> >>>>
> >>>> On Fri, Jan 20, 2012 at 9:20 AM, Royston Sellman <
> >>>> [email protected]> wrote:
> >>>>
> >>>>> Trying to run my code (a test of the Aggregation Protocol and an MR
> >>>>> HBase table loader) on the latest build of 0.92.0 (r1232715), I get
> >>>>> an 'old server' warning (I've seen this before and it's always been
> >>>>> non-fatal), then an out-of-memory exception, and then the job hangs:
> >>>>>
> >>>>>
> >>>>>
> >>>>> [sshexec] 12/01/20 16:56:48 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
> >>>>>
> >>>>> [sshexec] 12/01/20 16:56:48 INFO zookeeper.ClientCnxn: Session establishment complete on server namenode/10.0.0.235:2181, sessionid = 0x34cda4e5d000e5, negotiated timeout = 40000
> >>>>>
> >>>>> [sshexec] 12/01/20 16:56:49 WARN ipc.HBaseClient: Unexpected exception receiving call responses
> >>>>>
> >>>>> [sshexec] java.lang.OutOfMemoryError: Java heap space
> >>>>>
> >>>>> [sshexec]       at java.lang.reflect.Array.newArray(Native Method)
> >>>>>
> >>>>> [sshexec]       at java.lang.reflect.Array.newInstance(Array.java:52)
> >>>>>
> >>>>> [sshexec]       at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:542)
> >>>>>
> >>>>> [sshexec]       at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:289)
> >>>>>
> >>>>> [sshexec]       at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
> >>>>>
> >>>>> [sshexec]       at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> >>>>>
> >>>>>
> >>>>>
> >>>>> Hbase shell seems to work (I can list and scan my tables).
> >>>>>
> >>>>>
> >>>>>
> >>>>> If I svn roll back to the 12 Jan 0.92 revision and rebuild, my code
> >>>>> works.
> >>>>>
> >>>>>
> >>>>>
> >>>>> Tried setting export HBASE_HEAPSIZE=1500 but got same error.
> >>>>>
> >>>>>
> >>>>>
> >>>>> Nothing significant in logs.
> >>>>>
> >>>>>
> >>>>>
> >>>>> [Note to Ted Yu: I need to fix this so I can carry on testing on
> >>>>> Aggregation Protocol]
> >>>>>
> >>>>>
> >>>>>
> >>>>> Best Regards,
> >>>>>
> >>>>> Royston
> >>>>>
> >>>>>
> >>>
> >
>
>
