[jira] [Created] (PHOENIX-7265) Add 5.2 versions to BackwardsCompatibilityIT once released

2024-03-07 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7265:


 Summary: Add 5.2 versions to BackwardsCompatibilityIT once released
 Key: PHOENIX-7265
 URL: https://issues.apache.org/jira/browse/PHOENIX-7265
 Project: Phoenix
  Issue Type: Task
  Components: core
Affects Versions: 5.2.0, 5.3.0
Reporter: Istvan Toth


This is a reminder to add the new 5.2 versions once they are released.
We cannot add them until the 5.2.0 artifacts are available publicly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7255) Non-existent artifacts referred in compatible_client_versions.json

2024-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7255:


Assignee: Istvan Toth

> Non-existent artifacts referred in compatible_client_versions.json
> --
>
> Key: PHOENIX-7255
> URL: https://issues.apache.org/jira/browse/PHOENIX-7255
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
>
> The compatible_client_versions.json file refers to HBase 2.3 support for 
> Phoenix 5.2, which was removed some time ago, but the file has not been 
> updated.
> We need to remember to update this file whenever new HBase profiles are 
> added or old ones are dropped.





[jira] [Updated] (PHOENIX-7255) Non-existent artifacts referred in compatible_client_versions.json

2024-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7255:
-
Labels: test  (was: )

> Non-existent artifacts referred in compatible_client_versions.json
> --
>
> Key: PHOENIX-7255
> URL: https://issues.apache.org/jira/browse/PHOENIX-7255
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Priority: Major
>  Labels: test
>
> The compatible_client_versions.json file refers to HBase 2.3 support for 
> Phoenix 5.2, which was removed some time ago, but the file has not been 
> updated.
> We need to remember to update this file whenever new HBase profiles are 
> added or old ones are dropped.





[jira] [Resolved] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-07 Thread Rushabh Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh Shah resolved PHOENIX-7251.
---
Resolution: Fixed

Thank you [~palashc] for the patch. 

> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the two clusters end up using the same 
> `ServerMetadataCache`. All tests that execute queries with one of the 
> clusters unavailable will fail.
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 
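The four steps above can be sketched as a minimal, self-contained model (hedged: the interface and implementation names follow the description, but the payload type, the method `getLastDDLTimestamp`, and the plain String config key are illustrative stand-ins for the real Phoenix types):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Step 1: a ServerMetadataCache interface, with the current
// implementation kept as a per-JVM singleton.
interface ServerMetadataCache {
    String getLastDDLTimestamp(String tableName);
}

class ServerMetadataCacheImpl implements ServerMetadataCache {
    private static volatile ServerMetadataCacheImpl instance;

    static ServerMetadataCacheImpl getInstance() {
        if (instance == null) {
            synchronized (ServerMetadataCacheImpl.class) {
                if (instance == null) {
                    instance = new ServerMetadataCacheImpl();
                }
            }
        }
        return instance;
    }

    public String getLastDDLTimestamp(String tableName) {
        return "impl:" + tableName; // placeholder payload
    }
}

// Step 2: the HA variant keeps one cache instance per cluster,
// keyed on a configuration identity (modeled here as a String).
class ServerMetadataCacheHAImpl implements ServerMetadataCache {
    private static final Map<String, ServerMetadataCacheHAImpl> INSTANCES =
            new ConcurrentHashMap<>();

    static ServerMetadataCacheHAImpl getInstance(String configKey) {
        return INSTANCES.computeIfAbsent(configKey, k -> new ServerMetadataCacheHAImpl());
    }

    public String getLastDDLTimestamp(String tableName) {
        return "ha:" + tableName; // placeholder payload
    }
}
```

Keying the HA variant on a per-cluster configuration identity lets each cluster's coprocessors see an independent cache, which is what the HA tests need.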





[jira] [Assigned] (PHOENIX-7245) NPE in Phoenix Coproc leading to Region Server crash

2024-03-07 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir reassigned PHOENIX-7245:
--

Assignee: Kadir Ozdemir

> NPE in Phoenix Coproc leading to Region Server crash
> 
>
> Key: PHOENIX-7245
> URL: https://issues.apache.org/jira/browse/PHOENIX-7245
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Affects Versions: 5.1.1
>Reporter: Ravi Kishore Valeti
>Assignee: Kadir Ozdemir
>Priority: Major
>
> In our production clusters, while investigating region server crashes, we 
> found that they are caused by the Phoenix coprocessor throwing a 
> NullPointerException in the 
> IndexRegionObserver.postBatchMutateIndispensably() method.
> Below are the logs:
> {code:java}
> 2024-02-26 13:52:40,716 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] coprocessor.CoprocessorHost - The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> 2024-02-26 13:52:40,725 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] regionserver.HRegionServer - * ABORTING region server ,x,1708268161243: The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException *
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)

[jira] [Updated] (PHOENIX-7253) Perf improvement for non-full scan queries on large table

2024-03-07 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Description: 
Any considerably large table with more than 100k regions can exhibit 
problematic performance if we access all region locations from meta for the 
given table before generating parallel or sequential scans for the given 
query. The perf impact can really hurt range scan queries.

Consider a table with hundreds of thousands of tenant views. Unless the query 
is a strict point lookup, any query on any tenant view ends up retrieving the 
region locations of all regions of the base table. If an IOException is 
thrown by the HBase client during a region location lookup in meta, we only 
perform a single retry.

Proposal:
 # All non point lookup queries should only retrieve region locations that 
cover the scan boundary. Avoid fetching all region locations of the base table.
 # Make retries configurable with higher default value.

 

The proposal should improve the performance of the following queries:
 * Range Scan
 * Range scan on Salted table
 * Range scan on Salted table with Tenant id and/or View index id
 * Range Scan on Tenant connection
 * Full Scan on Tenant connection
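Proposal #1 can be illustrated with a simplified, self-contained sketch (hedged: Phoenix actually works with byte[] region boundaries obtained from HBase; here region start keys are plain Strings and the class name RegionPruner is hypothetical). The idea is to select only the regions whose key range overlaps the scan boundary instead of fetching every region location for the table:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.SortedMap;

// Illustrative sketch: pick only the region locations covering
// [scanStart, scanStop) rather than all regions of the table.
public class RegionPruner {
    // regionsByStartKey maps each region's start key to its location;
    // "" denotes the first region's (unbounded) start key, and an empty
    // scanStop denotes an unbounded scan end.
    static List<String> regionsForScan(NavigableMap<String, String> regionsByStartKey,
                                       String scanStart, String scanStop) {
        // The region containing scanStart is the one with the greatest
        // start key <= scanStart.
        String from = regionsByStartKey.floorKey(scanStart);
        if (from == null) {
            from = regionsByStartKey.firstKey();
        }
        // Take regions from there up to (excluding) the first region
        // starting at or after scanStop.
        SortedMap<String, String> covering = scanStop.isEmpty()
                ? regionsByStartKey.tailMap(from)
                : regionsByStartKey.subMap(from, true, scanStop, false);
        return new ArrayList<>(covering.values());
    }
}
```

For a table with 100k+ regions, a range scan touching a handful of regions would then look up only those few locations in meta.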

 

Sample stacktrace from the multiple failures observed:
{code:java}
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
Stack trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
    at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
    at org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
    at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
    at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
    at org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
    at org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
    at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
    at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
    ...
    ...
    Caused by: java.io.InterruptedIOException: Origin: InterruptedException
        at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
        ... 254 more
Caused by: java.lang.InterruptedException
        at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
        at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1288)

[jira] [Updated] (PHOENIX-7243) Add connectionType property to ConnectionInfo class.

2024-03-07 Thread Rushabh Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh Shah updated PHOENIX-7243:
--
Summary: Add connectionType property to ConnectionInfo class.  (was: Add 
isServerConnection property to ConnectionInfo class.)

> Add connectionType property to ConnectionInfo class.
> 
>
> Key: PHOENIX-7243
> URL: https://issues.apache.org/jira/browse/PHOENIX-7243
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rushabh Shah
>Assignee: Palash Chauhan
>Priority: Major
> Fix For: 5.3.0
>
>
> In PhoenixDriver, we have a cache of ConnectionQueryServices which is keyed 
> by ConnectionInfo object. Refer 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java#L258-L270]
>  for more details.
> Let's say we want to create a server connection (with the property 
> IS_SERVER_CONNECTION set to true) and a _non server_ connection is already 
> present in the cache (with the same user, principal, keytab, and haGroup); 
> the cache will return the non-server connection.
> We need to add isServerConnection property to 
> [ConnectionInfo|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/jdbc/ConnectionInfo.java#L317-L334]
>  class to differentiate between server and non server connection.
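A minimal sketch of the cache-key problem (hedged: the field set and names here are illustrative stand-ins, not the actual ConnectionInfo fields, which also include user, keytab, and haGroup). Without the connection type in equals/hashCode, a server-connection lookup would return the cached client entry; with it, the two are distinct keys:

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for ConnectionInfo as a cache key.
class ConnectionInfoKey {
    enum ConnectionType { CLIENT, SERVER }

    final String principal;          // illustrative subset of the real fields
    final ConnectionType connectionType;

    ConnectionInfoKey(String principal, ConnectionType connectionType) {
        this.principal = principal;
        this.connectionType = connectionType;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionInfoKey)) {
            return false;
        }
        ConnectionInfoKey other = (ConnectionInfoKey) o;
        return Objects.equals(principal, other.principal)
                && connectionType == other.connectionType; // the added discriminator
    }

    @Override
    public int hashCode() {
        return Objects.hash(principal, connectionType);
    }
}
```

With connectionType included in the key, the ConnectionQueryServices cache in PhoenixDriver can hold a server and a non-server entry side by side for otherwise identical connection properties.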





[jira] [Resolved] (PHOENIX-7243) Add isServerConnection property to ConnectionInfo class.

2024-03-07 Thread Rushabh Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh Shah resolved PHOENIX-7243.
---
Fix Version/s: 5.3.0
   Resolution: Fixed

> Add isServerConnection property to ConnectionInfo class.
> 
>
> Key: PHOENIX-7243
> URL: https://issues.apache.org/jira/browse/PHOENIX-7243
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rushabh Shah
>Assignee: Palash Chauhan
>Priority: Major
> Fix For: 5.3.0
>
>
> In PhoenixDriver, we have a cache of ConnectionQueryServices which is keyed 
> by ConnectionInfo object. Refer 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java#L258-L270]
>  for more details.
> Let's say we want to create a server connection (with the property 
> IS_SERVER_CONNECTION set to true) and a _non server_ connection is already 
> present in the cache (with the same user, principal, keytab, and haGroup); 
> the cache will return the non-server connection.
> We need to add isServerConnection property to 
> [ConnectionInfo|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/jdbc/ConnectionInfo.java#L317-L334]
>  class to differentiate between server and non server connection.





[jira] [Updated] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7263:
-
Fix Version/s: 5.3.0
   (was: 5.2.1)

> Row value constructor split keys not allowed on indexes
> ---
>
> Key: PHOENIX-7263
> URL: https://issues.apache.org/jira/browse/PHOENIX-7263
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>
> While creating an index, passing row value constructor split keys fails with 
> the following error. The same works with CREATE TABLE, because table creation 
> properly builds the split keys using the expression compiler, which is not 
> done during index creation.
> {noformat}
> java.lang.ClassCastException: 
> org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
> org.apache.phoenix.expression.LiteralExpression
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {noformat}
> In create table:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> ImmutableBytesWritable ptr = context.getTempPtr();
> ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (node instanceof BindParseNode) {
>         context.getBindManager().addParamMetaData((BindParseNode) node, VARBINARY_DATUM);
>     }
>     if (node.isStateless()) {
>         Expression expression = node.accept(expressionCompiler);
>         if (expression.evaluate(null, ptr)) {
>             splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
>             continue;
>         }
>     }
>     throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>             .setMessage("Node: " + node).build().buildException();
> }
> {code}
> Whereas index creation expects only literals:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (!node.isStateless()) {
>         throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>                 .setMessage("Node: " + node).build().buildException();
>     }
>     LiteralExpression expression = (LiteralExpression) node.accept(expressionCompiler);
>     splits[i] = expression.getBytes();
> }
> {code}





[jira] [Updated] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7263:
-
Fix Version/s: 5.2.0
   5.2.1
   5.1.4

> Row value constructor split keys not allowed on indexes
> ---
>
> Key: PHOENIX-7263
> URL: https://issues.apache.org/jira/browse/PHOENIX-7263
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.2.1, 5.1.4
>
>
> While creating an index, passing row value constructor split keys fails with 
> the following error. The same works with CREATE TABLE, because table creation 
> properly builds the split keys using the expression compiler, which is not 
> done during index creation.
> {noformat}
> java.lang.ClassCastException: 
> org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
> org.apache.phoenix.expression.LiteralExpression
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {noformat}
> In create table:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> ImmutableBytesWritable ptr = context.getTempPtr();
> ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (node instanceof BindParseNode) {
>         context.getBindManager().addParamMetaData((BindParseNode) node, VARBINARY_DATUM);
>     }
>     if (node.isStateless()) {
>         Expression expression = node.accept(expressionCompiler);
>         if (expression.evaluate(null, ptr)) {
>             splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
>             continue;
>         }
>     }
>     throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>             .setMessage("Node: " + node).build().buildException();
> }
> {code}
> Whereas index creation expects only literals:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (!node.isStateless()) {
>         throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>                 .setMessage("Node: " + node).build().buildException();
>     }
>     LiteralExpression expression = (LiteralExpression) node.accept(expressionCompiler);
>     splits[i] = expression.getBytes();
> }
> {code}





[jira] [Created] (PHOENIX-7264) Admin.flush() hangs in HBase 3 while the clock is stopped

2024-03-07 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7264:


 Summary: Admin.flush() hangs in HBase 3 while the clock is stopped
 Key: PHOENIX-7264
 URL: https://issues.apache.org/jira/browse/PHOENIX-7264
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Istvan Toth
Assignee: Istvan Toth


Several tests using EnvironmentEdgeManager are hanging and/or failing with 
HBase 3.

I don't really know how to fix them, as the tests break if we let the clock 
run. HBase doesn't seem to care about this use case, so we probably just have 
to disable these tests on HBase 3.






[jira] [Commented] (OMID-287) Improve omid startup script to have all the options like pid file generation, log handling etc

2024-03-07 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824378#comment-17824378
 ] 

Istvan Toth commented on OMID-287:
--

Sounds good.

> Improve omid startup script to have all the options like pid file generation, 
> log handling etc
> --
>
> Key: OMID-287
> URL: https://issues.apache.org/jira/browse/OMID-287
> Project: Phoenix Omid
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Nihal Jain
>Priority: Major
>
> Currently the Omid startup script has no way to check liveness via PID file 
> generation, no log handling (such as log rolling during startup), and no way 
> to pass a custom log directory. It would be better to adopt the hbase-env.sh 
> script to get environment variables like HBASE_PID_DIR, HBASE_LOG_DIR, etc., 
> the same way phoenix-queryserver does.
> FYI [~nihaljain.cs] [~stoty] 





[jira] [Created] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7263:


 Summary: Row value constructor split keys not allowed on indexes
 Key: PHOENIX-7263
 URL: https://issues.apache.org/jira/browse/PHOENIX-7263
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


While creating an index, passing row value constructor split keys fails with 
the following error. The same works with CREATE TABLE, because table creation 
properly builds the split keys using the expression compiler, which is not 
done during index creation.
{noformat}
java.lang.ClassCastException: 
org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
org.apache.phoenix.expression.LiteralExpression
at 
org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
{noformat}

In create table:

{code:java}
final byte[][] splits = new byte[splitNodes.size()][];
ImmutableBytesWritable ptr = context.getTempPtr();
ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
for (int i = 0; i < splits.length; i++) {
    ParseNode node = splitNodes.get(i);
    if (node instanceof BindParseNode) {
        context.getBindManager().addParamMetaData((BindParseNode) node, VARBINARY_DATUM);
    }
    if (node.isStateless()) {
        Expression expression = node.accept(expressionCompiler);
        if (expression.evaluate(null, ptr)) {
            splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
            continue;
        }
    }
    throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
            .setMessage("Node: " + node).build().buildException();
}
{code}

Whereas index creation expects only literals:

{code:java}
final byte[][] splits = new byte[splitNodes.size()][];
for (int i = 0; i < splits.length; i++) {
    ParseNode node = splitNodes.get(i);
    if (!node.isStateless()) {
        throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
                .setMessage("Node: " + node).build().buildException();
    }
    LiteralExpression expression = (LiteralExpression) node.accept(expressionCompiler);
    splits[i] = expression.getBytes();
}
{code}
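The difference between the two paths can be shown with a self-contained toy model (hedged: Node, LiteralNode, RvcNode, and SplitKeys below are illustrative stand-ins, not the actual Phoenix ParseNode/Expression classes). Evaluating any stateless node, as the create-table path does, handles a row value constructor; casting to a literal, as the index path does, throws the ClassCastException seen above:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Toy expression nodes: a single literal, and a row value
// constructor (RVC) composed of literals.
interface Node {
    boolean isStateless();
    byte[] evaluate();
}

class LiteralNode implements Node {
    final String value;
    LiteralNode(String value) { this.value = value; }
    public boolean isStateless() { return true; }
    public byte[] evaluate() { return value.getBytes(StandardCharsets.UTF_8); }
}

class RvcNode implements Node {
    final List<LiteralNode> children;
    RvcNode(LiteralNode... children) { this.children = Arrays.asList(children); }
    public boolean isStateless() { return true; }
    public byte[] evaluate() {
        // Concatenate the encoded children to form the split key bytes.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (LiteralNode child : children) {
            byte[] b = child.evaluate();
            out.write(b, 0, b.length);
        }
        return out.toByteArray();
    }
}

class SplitKeys {
    // Create-table style: evaluate any stateless node.
    static byte[] byEvaluation(Node node) {
        if (!node.isStateless()) {
            throw new IllegalArgumentException("Split point not constant: " + node);
        }
        return node.evaluate();
    }

    // Index style: insist on a literal; fails for RVC nodes.
    static byte[] byLiteralCast(Node node) {
        return ((LiteralNode) node).evaluate(); // ClassCastException for RvcNode
    }
}
```

In this model, `byEvaluation` accepts an RVC split point while `byLiteralCast` throws, mirroring why aligning CreateIndexCompiler with the create-table path would fix the error.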




