OFFSET will not scale well with large values as there is no way to
implement it in HBase other than scanning from the beginning and skipping
that many rows. I'd suggest using row value constructors instead. You can
read more about that here: https://phoenix.apache.org/paged.html
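
As a rough sketch (the table and column names below are made up for
illustration, and col1/col2 are assumed to be the leading primary key
columns), keyset paging with a row value constructor looks roughly like
this:

  SELECT * FROM my_table
  WHERE (col1, col2) > (?, ?)   -- bind to the last row of the previous page
  ORDER BY col1, col2
  LIMIT 1000;

Because the row value constructor translates into a start row for the
HBase scan, each page picks up where the previous one left off instead of
re-scanning and skipping millions of rows the way OFFSET does.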

Thanks,
James

On Fri, May 25, 2018 at 6:36 AM, ychernya...@gmail.com <ychernya...@gmail.com> wrote:

> Hi everyone.
>
>    I faced a problem executing a query. We have a table with 150 rows: if
> you execute a query with a huge OFFSET, ORDER BY, and LIMIT, Phoenix
> crashes with the following error:
>
>
> Example query:
>
> Select * from table order by col1, col2 limit 1000 offset 15677558;
>
> Caused by:
>
> Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TABLE,,1524572088462.899bce582428250714db99a6b679e435.: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:264)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:66)
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:89)
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:95)
>         at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getTopNScanner(NonAggregateRegionScannerFactory.java:315)
>         at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:163)
>         at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>         ... 8 more (state=08000,code=101)
>
>
> But if you remove the "order by" or the "limit", the problem goes away.
>
>
> Versions:
> Phoenix:  4.13.1
> HBase: 1.3
> Hadoop: 2.6.4
>
