Re: Phoenix ODBC driver limitations

2018-05-25 Thread Josh Elser

It's confusing with PQS in the mix :)

The original documentation meant client-side (the JDBC thick driver) and 
server-side (HBase). With PQS in the mix, you'd need to set it on PQS and 
HBase (but not on the client side, I'm guessing).
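For reference, a sketch of the relevant hbase-site.xml fragment (property names are from the Phoenix namespace mapping docs; the same values would need to be in the copy of hbase-site.xml that PQS reads and in the one the HBase servers read):

```xml
<!-- hbase-site.xml, on both PQS and the HBase servers -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```

If the values differ between PQS and HBase, you get exactly the "Inconsistent namespace mapping properties" error described below, so PQS needs a restart after changing them.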


On 5/24/18 3:24 AM, Stepan Migunov wrote:

Yes, I read this. But the document says "This needs to be set at client and
server both". I was confused about what the "client" is in the case of an
ODBC connection. I assumed it was the driver, but it seems it is the query server.

-Original Message-
From: Francis Chuang [mailto:francischu...@apache.org]
Sent: Thursday, May 24, 2018 1:35 AM
To: user@phoenix.apache.org
Subject: Re: Phoenix ODBC driver limitations

Namespace mapping is something you need to enable on the server (it's off by
default).

See documentation for enabling it here:
http://phoenix.apache.org/namspace_mapping.html

Francis

On 24/05/2018 5:23 AM, Stepan Migunov wrote:

Thank you for your response, Josh!

I got something like "Inconsistent namespace mapping properties" and
thought it was because it's impossible to set
"isNamespaceMappingEnabled" for the ODBC driver (client side). After
your explanation I understand that the "client" in this case is the
query server, not the ODBC driver. Now I need to check why the query
server doesn't apply this property.

-Original Message-
From: Josh Elser [mailto:els...@apache.org]
Sent: Wednesday, May 23, 2018 6:52 PM
To: user@phoenix.apache.org
Subject: Re: Phoenix ODBC driver limitations

I'd be surprised to hear that the ODBC driver would need to know
anything about namespace-mapping.

Do you have an error? Steps to reproduce an issue which you see?

The reason I am surprised is that namespace mapping is an
implementation detail of the JDBC driver which lives inside of PQS --
*not* the ODBC driver. The trivial thing you can check would be to
validate that the hbase-site.xml which PQS references is up to date
and that PQS was restarted to pick up the newest version of
hbase-site.xml.

On 5/22/18 4:16 AM, Stepan Migunov wrote:

Hi,

Is the ODBC driver from Hortonworks the only way to access Phoenix
from .NET code now?
The problem is that the driver has some critical limitations - it seems
the driver doesn't support namespace mapping (it isn't able to
connect to Phoenix if phoenix.schema.isNamespaceMappingEnabled=true)
and doesn't support query hints.

Regards,
Stepan.



Re: Problem with query with: limit, offset and order by

2018-05-25 Thread James Taylor
OFFSET will not scale well with large values as there is no way to
implement it in HBase other than scanning from the beginning and skipping
that many rows. I'd suggest using row value constructors instead. You can
read more about that here: https://phoenix.apache.org/paged.html
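A minimal sketch of the row value constructor approach from the paging docs (the table and column names here are hypothetical, and the seek optimization assumes col1 and col2 are the leading primary key columns):

```sql
-- First page: no starting point yet.
SELECT * FROM my_table
ORDER BY col1, col2
LIMIT 1000;

-- Subsequent pages: instead of OFFSET, seek past the last row of the
-- previous page with a row value constructor. HBase can start the scan
-- directly at that key rather than reading and discarding millions of rows.
SELECT * FROM my_table
WHERE (col1, col2) > (:last_col1, :last_col2)
ORDER BY col1, col2
LIMIT 1000;
```

The application keeps the (col1, col2) values of the last row returned and binds them into the next query, so each page is an O(page size) scan regardless of how deep into the table it is.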

Thanks,
James

On Fri, May 25, 2018 at 6:36 AM, ychernya...@gmail.com <
ychernya...@gmail.com> wrote:

> Hi everyone.
>
>    I faced a problem executing a query. We have a table with 150 rows; if
> you try to execute a query with a huge offset, an order by, and a limit,
> Phoenix will crash with the following error:
>
>
> Example query:
>
> Select * from table order by col1, col2 limit 1000 offset 15677558;
>
> Caused by:
>
> Error: org.apache.phoenix.exception.PhoenixIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: TABLE,,1524572088462.899bce582428250714db99a6b679e435.: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:264)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:66)
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:89)
>         at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:95)
>         at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getTopNScanner(NonAggregateRegionScannerFactory.java:315)
>         at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:163)
>         at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
> ... 8 more (state=08000,code=101)
>
>
> But if you remove "order by" or "limit", the problem goes away.
>
>
> Versions:
> Phoenix:  4.13.1
> Hbase: 1.3
> Hadoop : 2.6.4
>


Problem with query with: limit, offset and order by

2018-05-25 Thread ychernyatin
Hi everyone.

   I faced a problem executing a query. We have a table with 150 rows; if you 
try to execute a query with a huge offset, an order by, and a limit, Phoenix 
will crash with the following error:


Example query:

Select * from table order by col1, col2 limit 1000 offset 15677558;

Caused by:

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: TABLE,,1524572088462.899bce582428250714db99a6b679e435.: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:264)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 4201719544 bytes is larger than global pool of 272655974 bytes.
        at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:66)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:89)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:95)
        at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getTopNScanner(NonAggregateRegionScannerFactory.java:315)
        at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:163)
        at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
... 8 more (state=08000,code=101)


But if you remove "order by" or "limit", the problem goes away.


Versions:
Phoenix:  4.13.1
Hbase: 1.3
Hadoop : 2.6.4