[ANNOUNCE] Palash Chauhan as Phoenix Committer

2024-06-11 Thread Viraj Jasani
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Palash
Chauhan has accepted the PMC's invitation to become a committer on Apache
Phoenix.

We appreciate all of the great contributions Palash has made to the
community thus far and we look forward to their continued involvement.

Congratulations and Welcome, Palash!


[jira] [Resolved] (PHOENIX-7192) IDE shows errors on JSON comment

2024-06-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7192.
---
Resolution: Fixed

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.
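> For illustration (a minimal Jackson sketch, not part of the fix): strict
> JSON parsers reject comments, which is why IDE validators flag the header:
> {code:java}
> import com.fasterxml.jackson.core.JsonParser;
> import com.fasterxml.jackson.databind.ObjectMapper;
>
> public class JsonCommentCheck {
>     public static void main(String[] args) throws Exception {
>         String json = "/* ASF license header */ {\"key\": \"value\"}";
>         // Strict parsing (per the JSON spec): comments are rejected.
>         try {
>             new ObjectMapper().readTree(json);
>         } catch (Exception e) {
>             System.out.println("Strict parser rejects comments: " + e.getMessage());
>         }
>         // Jackson can be configured to tolerate comments, but IDE
>         // validators typically follow the strict spec.
>         ObjectMapper lenient = new ObjectMapper()
>                 .configure(JsonParser.Feature.ALLOW_COMMENTS, true);
>         System.out.println(lenient.readTree(json));
>     }
> }
> {code}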



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7192) IDE shows errors on JSON comment

2024-06-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7192:
--
Fix Version/s: 5.3.0

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6960) Scan range is wrong when query desc columns

2024-06-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-6960:
-

Assignee: Viraj Jasani  (was: Jing Yu)

> Scan range is wrong when query desc columns
> ---
>
> Key: PHOENIX-6960
> URL: https://issues.apache.org/jira/browse/PHOENIX-6960
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: fanartoria
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Ways to reproduce
> {code}
> 0: jdbc:phoenix:> create table sts(id integer primary key, name varchar, type integer, status integer);
> No rows affected (1.259 seconds)
> 0: jdbc:phoenix:> create index sts_name_desc on sts(status, type desc, name desc);
> No rows affected (6.376 seconds)
> 0: jdbc:phoenix:> create index sts_name_asc on sts(type desc, name) include (status);
> No rows affected (6.377 seconds)
> 0: jdbc:phoenix:> upsert into sts values(1, 'test10.txt', 1, 1);
> 1 row affected (0.026 seconds)
> 0: jdbc:phoenix:> explain select * from sts where type = 1 and name like 'test10%';
> +------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> | PLAN                                                                                                 | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_ASC [~1,'test10'] - [~1,'test11'] | null           | null          | null        |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> 1 row selected (0.023 seconds)
> 0: jdbc:phoenix:> select * from sts where type = 1 and name like 'test10%';
> +----+------------+------+--------+
> | ID |    NAME    | TYPE | STATUS |
> +----+------------+------+--------+
> | 1  | test10.txt | 1    | 1      |
> +----+------------+------+--------+
> 1 row selected (0.033 seconds)
> 0: jdbc:phoenix:> explain select * from sts where status = 1 and type = 1 and name like 'test10%';
> +--------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> | PLAN                                                                                                         | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> +--------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_DESC [1,~1,~'test10'] - [1,~1,~'test1/'] | null           | null          | null        |
> | SERVER FILTER BY FIRST KEY ONLY AND "NAME" LIKE 'test10%'                                                   | null           | null          | null        |
> +--------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
> 2 rows selected (0.022 seconds)
> 0: jdbc:phoenix:> select * from sts where status = 1 and type = 1 and name like 'test10%';
> +----+------+------+--------+
> | ID | NAME | TYPE | STATUS |
> +----+------+------+--------+
> +----+------+------+--------+
> No rows selected (0.04 seconds)
> {code}
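> For intuition, a small self-contained sketch (plain Java, not Phoenix code)
> of why the range must flip for DESC columns: Phoenix stores a DESC column
> with every byte inverted, which reverses the sort order of the encoded
> keys, so the bounds of an ascending prefix range must be swapped after
> inversion rather than recomputed in place:
> {code:java}
> public class DescRangeSketch {
>     // Phoenix encodes DESC columns by inverting every byte (b -> ~b),
>     // which reverses the sort order of the encoded values.
>     static byte[] invert(byte[] in) {
>         byte[] out = new byte[in.length];
>         for (int i = 0; i < in.length; i++) {
>             out[i] = (byte) ~in[i];
>         }
>         return out;
>     }
>
>     // Unsigned lexicographic comparison, as HBase compares row keys.
>     static int compare(byte[] a, byte[] b) {
>         for (int i = 0; i < Math.min(a.length, b.length); i++) {
>             int d = (a[i] & 0xff) - (b[i] & 0xff);
>             if (d != 0) return d;
>         }
>         return a.length - b.length;
>     }
>
>     public static void main(String[] args) {
>         byte[] lower = "test10".getBytes(); // ASC-space lower bound of LIKE 'test10%'
>         byte[] upper = "test11".getBytes(); // ASC-space exclusive upper bound
>         // After inversion the order flips: the ASC upper bound becomes the
>         // smaller encoded key, so a correct DESC range is
>         // [~'test11', ~'test10'], not [~'test10', ~'test1/'].
>         System.out.println(compare(invert(lower), invert(upper)) > 0); // true
>     }
> }
> {code}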



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-628) Support native JSON data type

2024-05-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-628.
--
Resolution: Fixed

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>Priority: Blocker
>  Labels: JSON, Java, SQL
> Fix For: 5.3.0, 4.4.1
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]
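> As a rough illustration of where this is heading, a JDBC session against
> the new type might look like the sketch below. The JSON column type and the
> JSON_VALUE function follow the design document attached to this issue; the
> exact literal syntax and function names are assumptions and may differ in
> the released version:
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class JsonTypeSketch {
>     public static void main(String[] args) throws Exception {
>         // Hypothetical connection string; adjust for your cluster.
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement()) {
>             stmt.execute("CREATE TABLE IF NOT EXISTS events "
>                     + "(id INTEGER PRIMARY KEY, payload JSON)");
>             // JSON literal syntax here is an assumption for illustration.
>             stmt.execute("UPSERT INTO events VALUES "
>                     + "(1, JSON '{\"user\": \"alice\", \"score\": 42}')");
>             conn.commit();
>             // JSON_VALUE extracts a scalar by JSONPath (assumed name).
>             try (ResultSet rs = stmt.executeQuery(
>                     "SELECT JSON_VALUE(payload, '$.user') FROM events")) {
>                 while (rs.next()) {
>                     System.out.println(rs.getString(1));
>                 }
>             }
>         }
>     }
> }
> {code}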



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-628) Support native JSON data type

2024-05-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-628:
-
Release Note: Initial support for the JSON data type in Phoenix. More 
follow-up work is expected in the future.
Priority: Blocker

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>Priority: Blocker
>  Labels: JSON, Java, SQL
> Fix For: 4.4.1, 5.3.0
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-628) Support native JSON data type

2024-05-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-628:
-
Issue Type: New Feature  (was: Task)

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>  Labels: JSON, Java, SQL
> Fix For: 4.4.1, 5.3.0
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-628) Support native JSON data type

2024-05-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-628:
-
Fix Version/s: 5.3.0

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>  Labels: JSON, Java, SQL
> Fix For: 4.4.1, 5.3.0
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-628) Support native JSON data type

2024-05-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-628:
-
Fix Version/s: (was: 5.3.0)

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>  Labels: JSON, Java, SQL
> Fix For: 4.4.1
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7306) Metadata lookup should be permitted only within query timeout

2024-05-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7306.
---
Resolution: Fixed

> Metadata lookup should be permitted only within query timeout
> -
>
> Key: PHOENIX-7306
> URL: https://issues.apache.org/jira/browse/PHOENIX-7306
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.2.1, 5.3.0, 5.1.4
>
>
> When a Phoenix query performs region location metadata lookup, a region 
> split or merge could cause temporary inconsistency, which would later be 
> resolved by updating the region location at the HBase client side as part 
> of the Scan operation.
> However, frequent region splits/merges could potentially cause the query to 
> fail, as the Phoenix client consistently checks the region boundary of 
> adjacent regions while retrieving region locations for each region one by 
> one. Instead of throwing errors, we should allow the metadata lookup to 
> continue up to the given query timeout param. This would prevent the 
> Phoenix client from getting stuck forever in case of any abnormal HBase 
> region boundary inconsistencies.
> The proposal (see the sketch below):
>  * Increase the default retry count for the overall metadata lookup 
> operation.
>  * Do not throw an Exception if the region boundaries are determined to be 
> overlapping. This would be resolved by the HBase client internally while 
> either opening the scanner or resuming the scanner after receiving an 
> error from the server side.
>  * Do not allow metadata lookup to continue beyond the configured query 
> timeout.
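> As an illustrative sketch only (not the committed patch; the helper names
> here are made up), bounding the lookup by a deadline rather than a fixed
> retry count could look like this:
> {code:java}
> import java.util.concurrent.TimeUnit;
> import java.util.function.Supplier;
>
> public class DeadlineBoundedLookup {
>     /**
>      * Illustrative only: retry a region-location lookup until it succeeds
>      * or the query timeout elapses, instead of failing after a fixed
>      * number of attempts.
>      */
>     static <T> T lookupWithinQueryTimeout(Supplier<T> lookup, long queryTimeoutMs)
>             throws Exception {
>         long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(queryTimeoutMs);
>         Exception last = null;
>         while (System.nanoTime() < deadline) {
>             try {
>                 return lookup.get(); // e.g. fetch region locations for the scan range
>             } catch (RuntimeException e) {
>                 last = e;            // transient split/merge inconsistency: retry
>                 Thread.sleep(100L);  // simple backoff; real code would add jitter
>             }
>         }
>         throw new Exception("Metadata lookup exceeded query timeout", last);
>     }
> }
> {code}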



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7311) 5.2 multibranch build is not getting triggered automatically

2024-05-01 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7311:
-

 Summary: 5.2 multibranch build is not getting triggered 
automatically
 Key: PHOENIX-7311
 URL: https://issues.apache.org/jira/browse/PHOENIX-7311
 Project: Phoenix
  Issue Type: Task
Reporter: Viraj Jasani


Unlike commits on the master and 5.1 branches, commits landing on 5.2 are 
not triggering multibranch builds on 
[https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.2/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (PHOENIX-7245) NPE in Phoenix Coproc leading to Region Server crash

2024-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened PHOENIX-7245:
---

> NPE in Phoenix Coproc leading to Region Server crash
> 
>
> Key: PHOENIX-7245
> URL: https://issues.apache.org/jira/browse/PHOENIX-7245
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Affects Versions: 5.1.1, 5.2.0
>Reporter: Ravi Kishore Valeti
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> While investigating Region Server crashes in our production environment, we 
> found that they were caused by the Phoenix coprocessor throwing a 
> NullPointerException in the 
> IndexRegionObserver.postBatchMutateIndispensably() method.
> Below are the logs:
> {code:java}
> 2024-02-26 13:52:40,716 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] coprocessor.CoprocessorHost - The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> 2024-02-26 13:52:40,725 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] regionserver.HRegionServer - * ABORTING region server ,x,1708268161243: The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException *
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicReg

[jira] [Resolved] (PHOENIX-7245) NPE in Phoenix Coproc leading to Region Server crash

2024-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7245.
---
Resolution: Fixed

> NPE in Phoenix Coproc leading to Region Server crash
> 
>
> Key: PHOENIX-7245
> URL: https://issues.apache.org/jira/browse/PHOENIX-7245
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Affects Versions: 5.1.1, 5.2.0
>Reporter: Ravi Kishore Valeti
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> While investigating Region Server crashes in our production environment, we 
> found that they were caused by the Phoenix coprocessor throwing a 
> NullPointerException in the 
> IndexRegionObserver.postBatchMutateIndispensably() method.
> Below are the logs:
> {code:java}
> 2024-02-26 13:52:40,716 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] coprocessor.CoprocessorHost - The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> 2024-02-26 13:52:40,725 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] regionserver.HRegionServer - * ABORTING region server ,x,1708268161243: The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException *
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAto

[jira] [Updated] (PHOENIX-7245) NPE in Phoenix Coproc leading to Region Server crash

2024-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7245:
--
Fix Version/s: 5.2.1
   5.3.0

> NPE in Phoenix Coproc leading to Region Server crash
> 
>
> Key: PHOENIX-7245
> URL: https://issues.apache.org/jira/browse/PHOENIX-7245
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Affects Versions: 5.1.1, 5.2.0
>Reporter: Ravi Kishore Valeti
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> While investigating Region Server crashes in our production environment, we 
> found that they were caused by the Phoenix coprocessor throwing a 
> NullPointerException in the 
> IndexRegionObserver.postBatchMutateIndispensably() method.
> Below are the logs:
> {code:java}
> 2024-02-26 13:52:40,716 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] coprocessor.CoprocessorHost - The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> 2024-02-26 13:52:40,725 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] regionserver.HRegionServer - * ABORTING region server ,x,1708268161243: The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException *
> java.lang.NullPointerException
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
>     at org.apache.hadoop.hbas

[jira] [Updated] (PHOENIX-7229) Leverage bloom filters for single key point lookups

2024-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7229:
--
Fix Version/s: 5.2.1
   5.3.0

> Leverage bloom filters for single key point lookups
> ---
>
> Key: PHOENIX-7229
> URL: https://issues.apache.org/jira/browse/PHOENIX-7229
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0, 5.2.1, 5.3.0
>
>
> PHOENIX-6710 enabled bloom filters by default when Phoenix tables are 
> created. However, we were not making use of them, because Phoenix 
> translates point lookups to scans with the scan range [startkey, stopkey), 
> where startkey is inclusive and equal to the row key, and stopkey is 
> exclusive and is the next key after the row key.
> This fails the check inside the HBase code in 
> [StoreFileReader#passesBloomFilter|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java#L245-L250],
>  because HBase applies the bloom filter only to scans that are gets, and a 
> scan is a GET only if startkey = stopkey and both are inclusive. This is 
> defined in 
> [Scan#isGetScan|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L253-L255].
> We recently have some customers whose use case involves point lookups 
> where the row key is often not present in the table. Bloom filters are 
> ideal for those use cases.
> We can change our scan range for point lookups to leverage bloom filters, 
> as sketched below.
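> In HBase client API terms, the change amounts to issuing the point lookup
> as a get-style scan. A minimal sketch (not the Phoenix patch itself):
> {code:java}
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class PointLookupScanSketch {
>     public static void main(String[] args) {
>         byte[] rowKey = Bytes.toBytes("some-row-key");
>
>         // Old shape: [rowKey, nextKey) -- stop key exclusive, so
>         // Scan#isGetScan() is false and the bloom filter is skipped.
>         byte[] nextKey = Bytes.add(rowKey, new byte[] {0x00});
>         Scan rangeScan = new Scan().withStartRow(rowKey, true)
>                                    .withStopRow(nextKey, false);
>
>         // New shape: start == stop, both inclusive, so isGetScan() is true
>         // and StoreFileReader#passesBloomFilter can use the bloom filter.
>         Scan getScan = new Scan().withStartRow(rowKey, true)
>                                  .withStopRow(rowKey, true);
>
>         System.out.println(rangeScan.isGetScan()); // false
>         System.out.println(getScan.isGetScan());   // true
>     }
> }
> {code}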



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7227) Phoenix 5.2.0 release

2024-04-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7227.
---
Resolution: Fixed

> Phoenix 5.2.0 release
> -
>
> Key: PHOENIX-7227
> URL: https://issues.apache.org/jira/browse/PHOENIX-7227
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Major
>
> # Clean up fix versions
>  # Spin RCs + Close the repository on 
> https://repository.apache.org/#stagingRepositories
>  # "Release" stages nexus repository
>  # Promote RC artifacts in SVN
>  # Update reporter tool with the released version
>  # Push signed release tag
>  # Add release version to the download page
>  # Send announce email



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[ANNOUNCE] Apache Phoenix 5.2.0 is now available for download

2024-04-17 Thread Viraj Jasani
The Apache Phoenix team is pleased to announce the immediate availability
of Phoenix 5.2.0.

Apache Phoenix enables SQL-based OLTP and operational analytics for Apache
Hadoop using Apache HBase as its backing store and provides integration
with other projects in the Apache ecosystem such as Spark, Hive, Pig,
Flume, and MapReduce.

This is the first release of the project in the 5.2 release line, which
aims to improve the stability and reliability of Apache Phoenix.

Change Log and Release Notes can be found here.
CHANGELOG: https://downloads.apache.org/phoenix/phoenix-5.2.0/CHANGES.md
RELEASENOTES:
https://downloads.apache.org/phoenix/phoenix-5.2.0/RELEASENOTES.md

To download please follow the link from our website:
https://phoenix.apache.org/download.html

Questions, comments, and problems are always welcome at:
dev@phoenix.apache.org
u...@phoenix.apache.org

Thanks to all who contributed and made this release possible.

Cheers,
The Phoenix Dev Team


[jira] [Updated] (PHOENIX-6883) Phoenix metadata caching redesign

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6883:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Phoenix metadata caching redesign
> -
>
> Key: PHOENIX-6883
> URL: https://issues.apache.org/jira/browse/PHOENIX-6883
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Rushabh Shah
>Priority: Major
> Fix For: 5.3.0
>
>
> PHOENIX-6761 improves the client side metadata caching by eliminating the 
> separate cache for each connection. This improvement results in memory and 
> compute savings, since it eliminates copying the CQSI-level cache every 
> time a Phoenix connection is created, and also replaces the inefficient 
> CQSI-level cache implementation with Guava Cache.
> Despite this improvement, the overall metadata caching architecture begs 
> for redesign. This is because every operation in Phoenix needs to make 
> multiple RPCs to metadata servers for the SYSTEM.CATALOG table (please see 
> PHOENIX-6860) to ensure the latest metadata changes are visible to 
> clients. These constant RPCs make the region servers serving 
> SYSTEM.CATALOG a hotspot, and thus lead to poor performance and 
> availability issues.
> The UPDATE_CACHE_FREQUENCY configuration parameter specifies how 
> frequently the client cache is updated. However, setting this parameter to 
> a non-zero value results in stale caching. Stale caching can cause data 
> integrity issues. For example, if an index table creation is not visible 
> to the client, Phoenix would skip updating the index table in the write 
> path. That's why this parameter is typically set to zero. However, this 
> defeats the purpose of client side metadata caching.
> The redesign of the metadata caching architecture directly addresses this 
> issue by making sure that client metadata caching is always used (that is, 
> UPDATE_CACHE_FREQUENCY is set to NEVER) while still ensuring data 
> integrity. This is achieved by three main changes.
> The first change is to introduce server side metadata caching in all 
> region servers. Currently, server side metadata caching is used only on 
> the region servers serving SYSTEM.CATALOG. This metadata caching should be 
> strongly consistent, such that metadata updates invalidate the 
> corresponding entries in the server side caches. This ensures the server 
> cache does not become stale.
> The second change is that the Phoenix client passes the LAST_DDL_TIMESTAMP 
> table attribute along with scan and mutation operations to the server 
> regions (more accurately, to the Phoenix coprocessors). The Phoenix 
> coprocessors then check the timestamp on a given operation against the 
> timestamp in the server side cache to validate that the client did not use 
> stale metadata when it prepared the operation. If the client did use stale 
> metadata, the coprocessor returns an exception (which can be called 
> StaleClientMetadataCacheException) to the client.
> The third change is that upon receiving StaleClientMetadataCacheException, 
> the Phoenix client makes an RPC call to the metadata server to update the 
> client cache, reconstructs the operation with the updated cache, and 
> retries the operation (see the sketch below).
> This redesign requires updating client and server metadata caches only 
> when metadata is stale, instead of updating the client metadata cache for 
> each (scan or mutation) operation. This eliminates hot spotting on the 
> metadata servers, and thus the poor performance and availability issues 
> caused by it.
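> A rough sketch of the third change from the client's point of view (all
> names here are illustrative, not the committed API):
> {code:java}
> public class StaleCacheRetrySketch {
>     // Illustrative stand-in for the exception named in the description.
>     static class StaleClientMetadataCacheException extends Exception {}
>
>     // Minimal stand-in for the client-side metadata cache.
>     interface MetadataCache {
>         long getLastDdlTimestamp(String tableName);
>         void refreshFromServer(String tableName); // one RPC, only on staleness
>     }
>
>     interface Op<T> {
>         T run(long lastDdlTimestamp) throws StaleClientMetadataCacheException;
>     }
>
>     /**
>      * Sketch: send the cached LAST_DDL_TIMESTAMP with the operation; if the
>      * coprocessor reports the client metadata is stale, refresh the cache
>      * once and retry the operation with the new timestamp.
>      */
>     static <T> T execute(Op<T> op, MetadataCache cache, String tableName)
>             throws Exception {
>         try {
>             return op.run(cache.getLastDdlTimestamp(tableName));
>         } catch (StaleClientMetadataCacheException stale) {
>             cache.refreshFromServer(tableName);
>             return op.run(cache.getLastDdlTimestamp(tableName));
>         }
>     }
> }
> {code}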



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7110) Adding Explain Plan information to connectionActivity Logger

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7110:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Adding Explain Plan information to connectionActivity Logger
> 
>
> Key: PHOENIX-7110
> URL: https://issues.apache.org/jira/browse/PHOENIX-7110
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: vikas meka
>Assignee: vikas meka
>Priority: Minor
> Fix For: 5.3.0, 5.1.4
>
>
> Currently, the Connection Activity Logger has information related to 
> connection attributes like connection ID and TableName. This improvement 
> adds the explain plan output, which would help us understand which regions 
> are heavily queried when HBase is experiencing slow query responses.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5586) Add documentation for Splittable SYSTEM.CATALOG

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5586:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Add documentation for Splittable SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-5586
> URL: https://issues.apache.org/jira/browse/PHOENIX-5586
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.3.0
>
>
> There are many changes after PHOENIX-3534, especially for backwards 
> compatibility. There are additional configurations, such as 
> "phoenix.allow.system.catalog.rollback" (which allows rollback of 
> splittable SYSTEM.CATALOG), etc. We should document these changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6802) HA Client Documentation

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6802:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> HA Client Documentation
> ---
>
> Key: PHOENIX-6802
> URL: https://issues.apache.org/jira/browse/PHOENIX-6802
> Project: Phoenix
>  Issue Type: Task
>Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.3.0
>
>
> The Phoenix HA client is being released as part of Phoenix 5.2. This will 
> need documentation on the Phoenix site explaining how to use it, what use 
> cases it's suited for, and use cases (such as mutable tables) for which it 
> isn't. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7232) Phoenix level compaction is not collecting delete markers beyond max lookback age for view indexes

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7232:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Phoenix level compaction is not collecting delete markers beyond max lookback 
> age for view indexes
> --
>
> Key: PHOENIX-7232
> URL: https://issues.apache.org/jira/browse/PHOENIX-7232
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sanjeet Malhotra
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.3.0
>
>
> Currently, with phoenix.table.ttl.enabled, we try to retrieve the PTable 
> object using the HBase table name, and if the PTable instance is null we 
> skip the CompactionScanner completely. For view indexes, the physical 
> table name differs from the logical table name in SYSCAT, because the 
> physical table name carries the _IDX_ prefix. For this case, we can derive 
> the logical table name from the physical table name by removing the _IDX_ 
> prefix and then retrieving the PTable instance (see the sketch below).
> Currently, delete markers are not getting collected for view indexes, so 
> the cell count will keep increasing.
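> The name mapping itself is simple; a sketch (the real fix would go through
> Phoenix's naming utilities rather than raw string handling):
> {code:java}
> public class ViewIndexNameSketch {
>     // View index physical tables carry the _IDX_ prefix; strip it to
>     // recover the logical (SYSCAT) table name before the PTable lookup.
>     static final String VIEW_INDEX_PREFIX = "_IDX_";
>
>     static String logicalTableName(String physicalTableName) {
>         return physicalTableName.startsWith(VIEW_INDEX_PREFIX)
>                 ? physicalTableName.substring(VIEW_INDEX_PREFIX.length())
>                 : physicalTableName;
>     }
>
>     public static void main(String[] args) {
>         System.out.println(logicalTableName("_IDX_MY_SCHEMA.MY_TABLE")); // MY_SCHEMA.MY_TABLE
>         System.out.println(logicalTableName("MY_SCHEMA.MY_TABLE"));      // unchanged
>     }
> }
> {code}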



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-628) Support native JSON data type

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-628:
-
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Support native JSON data type
> -
>
> Key: PHOENIX-628
> URL: https://issues.apache.org/jira/browse/PHOENIX-628
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.4
>Reporter: James R. Taylor
>Assignee: Ranganath Govardhanagiri
>  Labels: JSON, Java, SQL
> Fix For: 4.4.1, 5.3.0
>
> Attachments: JSON Support for Phoenix.docx, Screen Shot 2022-02-02 at 
> 12.23.24 PM.png, image-2023-12-07-11-26-56-198.png
>
>
> MongoDB and Postgres do some interesting things with JSON. We should look at 
> adding similar support. For a detailed description, see JSONB support in 
> Postgres: 
> [http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up]
> [http://www.depesz.com/2014/03/25/waiting-for-9-4-introduce-jsonb-a-structured-format-for-storing-json/]
> [http://michael.otacoo.com/postgresql-2/manipulating-jsonb-data-with-key-unique/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6987) Tables with UPDATE_CACHE_FREQUENCY set to 0 should not be inserted into the client side metadata cache

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6987:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Tables with UPDATE_CACHE_FREQUENCY set to 0 should not be inserted into the 
> client side metadata cache
> --
>
> Key: PHOENIX-6987
> URL: https://issues.apache.org/jira/browse/PHOENIX-6987
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Palash Chauhan
>Priority: Major
> Fix For: 5.3.0
>
>
> CQSI maintains a client-side metadata cache for tables. 
> UPDATE_CACHE_FREQUENCY is a property which can be set on tables and is 
> used to decide when to update a table's metadata in the client-side cache. 
> If UPDATE_CACHE_FREQUENCY is set to 0, a table's metadata should always be 
> retrieved by getting the latest metadata from the server.
> Currently, tables with UPDATE_CACHE_FREQUENCY set to 0 are retrieved from 
> the server each time they are accessed for a query or mutation. After 
> every retrieval from the server, the old table ref in the cache is removed 
> and the new one is inserted unnecessarily (see the sketch below).
> [MetaDataCachingIT#testCacheShouldBeUsedOnlyForConfiguredTables()|https://github.com/apache/phoenix/blob/master/phoenix-core/src/it/java/org/apache/phoenix/query/MetaDataCachingIT.java#L170]
>  can be used to confirm that the cache is used only for tables with a 
> non-zero update cache frequency. This test is currently ignored.
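> The intended behavior can be sketched as a guard around the cache insert
> (illustrative names, not the actual CQSI code):
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> public class ConditionalCacheSketch {
>     static class PTableRef {
>         final long updateCacheFrequencyMs;
>         PTableRef(long updateCacheFrequencyMs) {
>             this.updateCacheFrequencyMs = updateCacheFrequencyMs;
>         }
>     }
>
>     private final ConcurrentMap<String, PTableRef> cache = new ConcurrentHashMap<>();
>
>     /**
>      * Sketch of the proposal: tables with UPDATE_CACHE_FREQUENCY = 0 are
>      * always resolved from the server anyway, so inserting them into the
>      * client-side cache only churns the map.
>      */
>     void addToCache(String tableName, PTableRef ref) {
>         if (ref.updateCacheFrequencyMs == 0L) {
>             return; // skip the pointless remove-and-insert cycle
>         }
>         cache.put(tableName, ref);
>     }
> }
> {code}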



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7064) Prepare of local index mutations is extremely slow

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7064:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Prepare of local index mutations is extremely slow
> --
>
> Key: PHOENIX-7064
> URL: https://issues.apache.org/jira/browse/PHOENIX-7064
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: fanartoria
>Priority: Major
> Fix For: 5.3.0, 5.1.4
>
> Attachments: ddl-global.sql, ddl-local.sql, gen-data.sh, 
> image-2023-10-09-17-29-47-856.png, image-2023-10-09-17-41-29-679.png, 
> test-patch-using-global-index-logic.patch
>
>
> When the data table has more than one index, the prepare time for a local 
> index is much higher than for a global index, even though write 
> performance should be better with local indexes.
> Here is the stack trace where most of the time is spent:
> !image-2023-10-09-17-29-47-856.png!
> It seems a LocalTableState object is created when preparing the index 
> mutation for each row.
> Compared with other ValueGetter implementations, LazyValueGetter may have 
> poor performance.
> Why not use IndexMaintainer#createGetterFromKeyValues, or combine the 
> logic with the global index prepare path?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7197) PhoenixMRJobSubmitter is failing with non-ha yarn cluster

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7197:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> PhoenixMRJobSubmitter is failing with non-ha yarn cluster
> -
>
> Key: PHOENIX-7197
> URL: https://issues.apache.org/jira/browse/PHOENIX-7197
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Anchal Kejriwal
>Priority: Major
> Fix For: 5.3.0, 5.1.4
>
>
> Currently, PhoenixMRJobSubmitter expects YARN HA to be enabled; otherwise 
> it fails:
> {noformat}
> 2024-02-07 06:01:31,942 INFO  [main] zookeeper.ZooKeeper: Session: 
> 0x100293e630a0088 closed
> Exception in thread "main" 2024-02-07 06:01:31,942 INFO  [main-EventThread] 
> zookeeper.ClientCnxn: EventThread shut down for session: 0x100293e630a0088
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2589)
>   at 
> org.apache.phoenix.util.PhoenixMRJobUtil.getActiveResourceManagerAddress(PhoenixMRJobUtil.java:103)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.getSubmittedYarnApps(PhoenixMRJobSubmitter.java:305)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.scheduleIndexBuilds(PhoenixMRJobSubmitter.java:251)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.main(PhoenixMRJobSubmitter.java:332)
> {noformat}
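> One possible direction, sketched below (not the committed fix): consult the
> RM address from configuration when HA is disabled, instead of reading the
> /yarn-leader-election znode unconditionally:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.yarn.conf.YarnConfiguration;
>
> public class RmAddressSketch {
>     /**
>      * Sketch: only consult ZooKeeper leader election when YARN HA is
>      * actually enabled; otherwise read the single RM address from config.
>      */
>     static String resourceManagerAddress(Configuration conf) {
>         boolean haEnabled = conf.getBoolean(YarnConfiguration.RM_HA_ENABLED,
>                 YarnConfiguration.DEFAULT_RM_HA_ENABLED);
>         if (!haEnabled) {
>             return conf.get(YarnConfiguration.RM_ADDRESS,
>                     YarnConfiguration.DEFAULT_RM_ADDRESS);
>         }
>         // HA case: fall back to the existing ZooKeeper-based lookup of the
>         // active RM (as PhoenixMRJobUtil does today).
>         return lookupActiveRmFromZk(conf);
>     }
>
>     private static String lookupActiveRmFromZk(Configuration conf) {
>         throw new UnsupportedOperationException("existing ZK lookup path");
>     }
> }
> {code}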



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7231) Delete from table for transformed table is failing

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7231:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Delete from table for transformed table is failing
> --
>
> Key: PHOENIX-7231
> URL: https://issues.apache.org/jira/browse/PHOENIX-7231
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sanjeet Malhotra
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.3.0
>
>
> Steps to reproduce:
>  # Create a table.
>  # Upsert one row into it.
>  # Change column encoding scheme to transform table.
>  # Run transform tool and make sure cutover is successful.
>  # Upsert one more row to the original Phoenix table, which now points to 
> the new physical table.
>  # Run delete from the table.
> The following error is thrown:
> {code:java}
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed to build index for unexpected reason!
>     at org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:208)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:467)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:997)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:994)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:994)
>     at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3790)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4508)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4446)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4368)
>     at org.apache.hadoop.hbase.regionserver.HRegion.lambda$batchMutate$10(HRegion.java:4381)
>     at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4380)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4376)
>     at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:277)
>     at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatchWithRetries(UngroupedAggregateRegionObserver.java:240)
>     at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:504)
>     at org.apache.phoenix.coprocessor.UngroupedAggregateRegionScanner.annotateAndCommit(UngroupedAggregateRegionScanner.java:691)
>     at org.apache.phoenix.coprocessor.UngroupedAggregateRegionScanner.next(UngroupedAggregateRegionScanner.java:642)
>     at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)
>     at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:79)
>     at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:79)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:254)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3389)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3655)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44996)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> Caused by: java.lang.N
> {code}

[jira] [Updated] (PHOENIX-7228) Document global uncovered and partial Indexes

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7228:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> Document global uncovered and partial Indexes
> -
>
> Key: PHOENIX-7228
> URL: https://issues.apache.org/jira/browse/PHOENIX-7228
> Project: Phoenix
>  Issue Type: Task
>Reporter: Kadir Ozdemir
>Priority: Major
> Fix For: 5.3.0
>
>
> The two new global secondary index features, uncovered indexes and partial 
> indexes, have been committed to the master branch and will be released as 
> part of the upcoming 5.2.0 release. The [web page for secondary 
> indexes|https://phoenix.apache.org/secondary_indexing.html] should be updated 
> with these new features. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7217) MaxLookback and TTL improvements and fixes

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7217:
--
Fix Version/s: 5.3.0
   (was: 5.2.0)

> MaxLookback and TTL improvements and fixes
> --
>
> Key: PHOENIX-7217
> URL: https://issues.apache.org/jira/browse/PHOENIX-7217
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.3.0
>
>
> Some improvements/fixes for Phoenix Table MaxLookback and TTL feature:
>  * Use overridden value of maxLookback ms while confirming whether region 
> level compaction is required
>  * Use map of table with map of ColumnFamily to MaxLookback value for the 
> override API
>  * For the entire row version that is within max lookback window, avoid extra 
> region level compaction
>  * Support DeleteFamilyVersion marker with DeleteFamily markers such that 
> DeleteFamilyVersion marker can mask only Put cells with the same timestamp. 
> This requires changes to how we build and consume the column list for all 
> cells that are outside the maxLookback window. The changes are required in 
> both phoenix and hbase level compaction. We need to retain 
> DeleteFamilyVersion markers as they can be used to perform masking of old 
> cells (inside or outside of TTL window). The combination of DeleteFamily and 
> DeleteFamilyVersion markers need more tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7279) column not found exception when aliased column used in order by of union all query and first query in it also aliased

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7279:
--
Fix Version/s: (was: 5.2.0)

> column not found exception when aliased column used in order by of union all 
> query and first query in it also aliased
> -
>
> Key: PHOENIX-7279
> URL: https://issues.apache.org/jira/browse/PHOENIX-7279
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 5.3.0, 5.1.4
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-04-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7263:
--
Fix Version/s: (was: 5.2.0)

> Row value constructor split keys not allowed on indexes
> ---
>
> Key: PHOENIX-7263
> URL: https://issues.apache.org/jira/browse/PHOENIX-7263
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.3.0, 5.1.4
>
>
> While creating indexes, if we pass row value constructor split keys, we 
> get the following error. The same passes with CREATE TABLE, because table 
> creation properly builds the split keys using the expression compiler, 
> which is not the case with index creation.
> {noformat}
> java.lang.ClassCastException: 
> org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
> org.apache.phoenix.expression.LiteralExpression
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {noformat}
> In create table:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> ImmutableBytesWritable ptr = context.getTempPtr();
> ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (node instanceof BindParseNode) {
>         context.getBindManager().addParamMetaData((BindParseNode) node, VARBINARY_DATUM);
>     }
>     if (node.isStateless()) {
>         Expression expression = node.accept(expressionCompiler);
>         if (expression.evaluate(null, ptr)) {
>             splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
>             continue;
>         }
>     }
>     throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>             .setMessage("Node: " + node).build().buildException();
> }
> {code}
> Whereas index creation expects only literals:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> for (int i = 0; i < splits.length; i++) {
>     ParseNode node = splitNodes.get(i);
>     if (!node.isStateless()) {
>         throw new SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
>                 .setMessage("Node: " + node).build().buildException();
>     }
>     LiteralExpression expression = (LiteralExpression) node.accept(expressionCompiler);
>     splits[i] = expression.getBytes();
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release of Apache Phoenix 5.2.0 RC8

2024-04-15 Thread Viraj Jasani
Thank you Richard, Lars, Rajeshbabu for the release votes!

Here is my +1,

* Signature: ok
* Checksum: ok
* Rat check: ok
* Build with source: ok
* Ran some backward compatibility tests against the 5.1 client, with CRUD on a
table, a table with indexes, views, and views with indexes: looks good

With a total of 5 binding votes (including mine) and no -0 or -1 votes,
this vote passes. I will release the RC soon.


On Mon, Apr 15, 2024 at 11:32 AM Richárd Antal 
wrote:

> +1 (binding)
>
> Signature: OK
> Checksum: OK
> Checksum for HBase 2.4, HBase 2.5, 2.5.0 binary distributions: OK
> Signature for HBase 2.4, HBase 2.5, 2.5.0 binary distributions: OK
>
> mvn clean apache-rat:check : OK
>
> mvn clean package -DskipTests: OK
> Unit tests - OK
> mvn clean verify: OK
> Note that MutableIndexFailureWithNamespaceIT is quite flaky, but it did pass
> on some runs.
>
> Richard Antal
>
> Lars Hofhansl  ezt írta (időpont: 2024. ápr. 15., H,
> 19:15):
>
> > +1 (binding)
> >
> > - Built from source.
> > - Tried with Omid
> > - Built the Trino connector and verified it all still works
> >
> > (I'm not active much with Phoenix anymore, but since I noticed the
> problem
> > with Omid, I thought I'd try it out and verify. :) )
> >
> > -- Lars
> >
> > On 2024/04/06 21:24:53 Viraj Jasani wrote:
> > > Please vote on this Apache Phoenix release candidate, Phoenix-5.2.0RC8
> > >
> > > The VOTE will remain open for at least 72 hours.
> > >
> > > [ ] +1 Release this package as Apache Phoenix 5.2.0
> > > [ ] -1 Do not release this package because ...
> > >
> > > The tag to be voted on is 5.2.0RC8:
> > >
> > > https://github.com/apache/phoenix/tree/5.2.0RC8
> > >
> > > The release files, including signatures, digests, as well as CHANGES.md
> > and
> > > RELEASENOTES.md included in this RC can be found at:
> > >
> > > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC8/
> > >
> > > Maven artifacts are available in a staging repository at:
> > >
> > >
> >
> https://repository.apache.org/content/repositories/orgapachephoenix-1256/
> > >
> > > Artifacts were signed with the 1012D134 key which can be found in:
> > >
> > > https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > >
> > > To learn more about Apache Phoenix, please see
> > >
> > > https://phoenix.apache.org/
> > >
> > > Thanks,
> > > Your Phoenix Release Manager
> > >
> >
>


[jira] [Created] (PHOENIX-7306) Metadata lookup should be permitted only within query timeout

2024-04-12 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7306:
-

 Summary: Metadata lookup should be permitted only within query 
timeout
 Key: PHOENIX-7306
 URL: https://issues.apache.org/jira/browse/PHOENIX-7306
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.2.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani
 Fix For: 5.2.1, 5.3.0, 5.1.4


When a Phoenix query performs a region location metadata lookup, a region split or 
merge could cause temporary inconsistency, which would later be resolved by 
updating the region location on the HBase client side as part of the Scan operation.

However, frequent region splits/merges could potentially cause the query to fail, 
as the Phoenix client consistently checks the region boundaries of adjacent 
regions while retrieving region locations one by one. Instead of throwing errors, 
we should allow the metadata lookup to continue up to the given query timeout. 
This would prevent the Phoenix client from getting stuck forever in case of any 
abnormal HBase region boundary inconsistencies.

The proposal:
 * Increase the default retry count for the overall metadata lookup operation.
 * Do not throw an exception if the region boundaries are determined to be 
overlapping. This would be resolved by the HBase client internally while either 
opening the scanner or resuming it after receiving an error from the server side.
 * Do not allow metadata lookup to continue beyond the configured query timeout 
(see the sketch below).
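
A minimal sketch of the bounded lookup loop described above; fetchRegionLocations() is a hypothetical helper, not a Phoenix API, and the exception handling is illustrative only:

{code:java}
import java.io.IOException;
import java.sql.SQLException;
import java.sql.SQLTimeoutException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;

// Sketch: retry the region location lookup on transient overlap or
// inconsistency, but never beyond the configured query timeout.
List<HRegionLocation> lookupWithinQueryTimeout(long queryTimeoutMs, int maxRetries)
        throws SQLException {
    long deadline = System.currentTimeMillis() + queryTimeoutMs;
    IOException lastFailure = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
        if (System.currentTimeMillis() >= deadline) {
            throw new SQLTimeoutException(
                    "Region location lookup exceeded query timeout", lastFailure);
        }
        try {
            // Overlapping boundaries are tolerated rather than thrown; the
            // HBase client resolves them when the scanner is opened or resumed.
            return fetchRegionLocations(); // hypothetical helper
        } catch (IOException e) {
            lastFailure = e; // transient inconsistency; retry until the deadline
        }
    }
    throw new SQLTimeoutException("Region location lookup failed after "
            + maxRetries + " retries", lastFailure);
}
{code}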



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release of Apache Phoenix 5.2.0 RC8

2024-04-12 Thread Viraj Jasani
Thanks Istvan!

A gentle reminder to others to test out the RC.


On Mon, Apr 8, 2024 at 7:34 AM Istvan Toth 
wrote:

> +1 (binding)
>
> Checksum: OK
> Signature: OK
> mvn install ( without test suite): OK
> mvn package ( with UTs): OK
> mvn apache-rat:check: OK
> Smoke test on pseudo-distributed cluster with Omid: OK
>
> best regards
> Istvan
>
> On Sat, Apr 6, 2024 at 11:25 PM Viraj Jasani  wrote:
>
> > Please vote on this Apache Phoenix release candidate, Phoenix-5.2.0RC8
> >
> > The VOTE will remain open for at least 72 hours.
> >
> > [ ] +1 Release this package as Apache Phoenix 5.2.0
> > [ ] -1 Do not release this package because ...
> >
> > The tag to be voted on is 5.2.0RC8:
> >
> > https://github.com/apache/phoenix/tree/5.2.0RC8
> >
> > The release files, including signatures, digests, as well as CHANGES.md
> and
> > RELEASENOTES.md included in this RC can be found at:
> >
> > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC8/
> >
> > Maven artifacts are available in a staging repository at:
> >
> >
> https://repository.apache.org/content/repositories/orgapachephoenix-1256/
> >
> > Artifacts were signed with the 1012D134 key which can be found in:
> >
> > https://dist.apache.org/repos/dist/release/phoenix/KEYS
> >
> > To learn more about Apache Phoenix, please see
> >
> > https://phoenix.apache.org/
> >
> > Thanks,
> > Your Phoenix Release Manager
> >
>
>
> --
> *István Tóth* | Sr. Staff Software Engineer
> *Email*: st...@cloudera.com
> cloudera.com <https://www.cloudera.com>
> [image: Cloudera] <https://www.cloudera.com/>
> [image: Cloudera on Twitter] <https://twitter.com/cloudera> [image:
> Cloudera on Facebook] <https://www.facebook.com/cloudera> [image: Cloudera
> on LinkedIn] <https://www.linkedin.com/company/cloudera>
> --
> --
>


[jira] [Updated] (PHOENIX-7302) Server Paging doesn't work on scans with limit

2024-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7302:
--
Fix Version/s: 5.3.0
   (was: 5.3)

> Server Paging doesn't work on scans with limit 
> ---
>
> Key: PHOENIX-7302
> URL: https://issues.apache.org/jira/browse/PHOENIX-7302
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> Full scans with a limit but no other filter, and range scans with a limit and 
> filters on a primary key prefix, do not support server paging.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7299) ScanningResultIterator should not time out a query after receiving a valid result

2024-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7299:
--
Fix Version/s: 5.2.1
   5.3.0
   (was: 5.2.0)

> ScanningResultIterator should not time out a query after receiving a valid 
> result
> -
>
> Key: PHOENIX-7299
> URL: https://issues.apache.org/jira/browse/PHOENIX-7299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir Ozdemir
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> Phoenix query time includes setting up scanners and retrieving the very first 
> result from each of these scanners. The query timeout check in 
> ScanningResultIterator introduced by PHOENIX-6918 extends the timeout 
> check beyond the first result from a given scanner. ScanningResultIterator 
> should not check for query timeout after the first valid (not dummy) result 
> from the server.
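
A minimal sketch of the intended guard; isDummy() and checkQueryTimeout() are stand-ins for the corresponding Phoenix helpers, not the actual iterator code:

{code:java}
import java.sql.SQLTimeoutException;
import org.apache.hadoop.hbase.client.Result;

// Sketch: enforce the query timeout only until the first valid (non-dummy)
// result is received from the server for this scanner.
private boolean firstValidResultReceived = false;

private void maybeCheckTimeout(Result result) throws SQLTimeoutException {
    if (firstValidResultReceived) {
        return; // no timeout check once a real result has been seen
    }
    if (isDummy(result)) {
        checkQueryTimeout(); // still waiting for a real result; may throw
    } else {
        firstValidResultReceived = true;
    }
}
{code}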



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] Release of Apache Phoenix 5.2.0 RC8

2024-04-06 Thread Viraj Jasani
Please vote on this Apache Phoenix release candidate, Phoenix-5.2.0RC8

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache Phoenix 5.2.0
[ ] -1 Do not release this package because ...

The tag to be voted on is 5.2.0RC8:

https://github.com/apache/phoenix/tree/5.2.0RC8

The release files, including signatures, digests, as well as CHANGES.md and
RELEASENOTES.md included in this RC can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC8/

Maven artifacts are available in a staging repository at:

https://repository.apache.org/content/repositories/orgapachephoenix-1256/

Artifacts were signed with the 1012D134 key which can be found in:

https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache Phoenix, please see

https://phoenix.apache.org/

Thanks,
Your Phoenix Release Manager


[jira] [Resolved] (PHOENIX-7299) ScanningResultIterator should not time out a query after receiving a valid result

2024-04-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7299.
---
Resolution: Fixed

> ScanningResultIterator should not time out a query after receiving a valid 
> result
> -
>
> Key: PHOENIX-7299
> URL: https://issues.apache.org/jira/browse/PHOENIX-7299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir Ozdemir
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> Phoenix query time includes setting up scanners and retrieving the very first 
> result from each of these scanners. The query timeout check in 
> ScanningResultIterator introduced by PHOENIX-6918 extends the timeout 
> check beyond the first result from a given scanner. ScanningResultIterator 
> should not check for query timeout after the first valid (not dummy) result 
> from the server.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7299) ScanningResultIterator should not time out a query after receiving a valid result

2024-04-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7299:
--
Fix Version/s: 5.2.0

> ScanningResultIterator should not time out a query after receiving a valid 
> result
> -
>
> Key: PHOENIX-7299
> URL: https://issues.apache.org/jira/browse/PHOENIX-7299
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir Ozdemir
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> Phoenix query time includes setting up scanners and retrieving the very first 
> result from each of these scanners. The query timeout check in 
> ScanningResultIterator introduced by PHOENIX-6918 extends the timeout 
> check beyond the first result from a given scanner. ScanningResultIterator 
> should not check for query timeout after the first valid (not dummy) result 
> from the server.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7291) Bump up omid to 1.1.2

2024-04-02 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7291:
--
Fix Version/s: (was: 5.3.0)

> Bump up omid to 1.1.2
> -
>
> Key: PHOENIX-7291
> URL: https://issues.apache.org/jira/browse/PHOENIX-7291
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7295) Fix getTableRegions failing due to overlap/inconsistencies on region

2024-04-02 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7295.
---
Resolution: Fixed

> Fix getTableRegions failing due to overlap/inconsistencies on region
> 
>
> Key: PHOENIX-7295
> URL: https://issues.apache.org/jira/browse/PHOENIX-7295
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> The call is failing with the below exception, but the prev region start/end key 
> is greater than the current region start/end key.
> {code:java}
> HBase region overlap/inconsistencies on 
> region=TEST.T1,00DG000BmaA0D51L7I7hn00051606bhpG,1630096440965.f7984a617b32835f54e080f236e6517e.,
>  hostname=localhost,61020,1711471610848, seqNum=130428 , current key: 
> 00DG000BmaA0D51L7Iagpf0051606wBmb , region startKey: 
> 00DG000BmaA0D51L7I7hn00051606bhpG , region endKey: 
> 00DG000BmaA0D51L7JBsQU005G004AXE2 , prev region startKey: 
> 00DG000BmaA0D51L9I6xCm0051606bhpG , prev region endKey: 
> 00DG000BmaA0D58X9JLrTm {code}
> {code:java}
> Exception encountered in getAllTableRegions for table: TEST.T1, retryCount: 3 
> , currentKey: 00DG000BmaA0D51L7Iagpf , startRowKey: 
> 00DG000BmaA0D51L7Iagpf , endRowKey: 00DG000BmaA0D58XE2Nmki
> Cause: java.io.IOException: HBase region information overlap/inconsistencies 
> on region 
> TEST.T1,00DG000BmaA0D51L7I7hn00051606bhpG,1630096440965.f7984a617b32835f54e080f236e6517e.
> StackTrace: 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getNextRegionStartKey(ConnectionQueryServicesImpl.java:775)
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegions(ConnectionQueryServicesImpl.java:815)
> org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegions(DelegateConnectionQueryServices.java:96)
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:85)
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:588)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7295) Fix getTableRegions failing due to overlap/inconsistencies on region

2024-04-02 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7295:
--
Fix Version/s: 5.2.0
   5.1.4

> Fix getTableRegions failing due to overlap/inconsistencies on region
> 
>
> Key: PHOENIX-7295
> URL: https://issues.apache.org/jira/browse/PHOENIX-7295
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> The call is failing with the below exception, but the prev region start/end key 
> is greater than the current region start/end key.
> {code:java}
> HBase region overlap/inconsistencies on 
> region=TEST.T1,00DG000BmaA0D51L7I7hn00051606bhpG,1630096440965.f7984a617b32835f54e080f236e6517e.,
>  hostname=localhost,61020,1711471610848, seqNum=130428 , current key: 
> 00DG000BmaA0D51L7Iagpf0051606wBmb , region startKey: 
> 00DG000BmaA0D51L7I7hn00051606bhpG , region endKey: 
> 00DG000BmaA0D51L7JBsQU005G004AXE2 , prev region startKey: 
> 00DG000BmaA0D51L9I6xCm0051606bhpG , prev region endKey: 
> 00DG000BmaA0D58X9JLrTm {code}
> {code:java}
> Exception encountered in getAllTableRegions for table: TEST.T1, retryCount: 3 
> , currentKey: 00DG000BmaA0D51L7Iagpf , startRowKey: 
> 00DG000BmaA0D51L7Iagpf , endRowKey: 00DG000BmaA0D58XE2Nmki
> Cause: java.io.IOException: HBase region information overlap/inconsistencies 
> on region 
> TEST.T1,00DG000BmaA0D51L7I7hn00051606bhpG,1630096440965.f7984a617b32835f54e080f236e6517e.
> StackTrace: 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getNextRegionStartKey(ConnectionQueryServicesImpl.java:775)
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegions(ConnectionQueryServicesImpl.java:815)
> org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegions(DelegateConnectionQueryServices.java:96)
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:85)
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:588)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7298) Expose client metric for GET_TABLE_REGIONS_FAIL errors

2024-04-02 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7298:
-

 Summary: Expose client metric for GET_TABLE_REGIONS_FAIL errors
 Key: PHOENIX-7298
 URL: https://issues.apache.org/jira/browse/PHOENIX-7298
 Project: Phoenix
  Issue Type: Improvement
Reporter: Viraj Jasani


We don't have any metric to track failures of metadata region location lookups 
while running a query. We should expose a counter metric in GlobalClientMetrics 
that can help track the number of failures to retrieve metadata.

We have the metric COUNTER_METADATA_INCONSISTENCY that helps track metadata 
inconsistency failures; however, such inconsistency eventually leads to 
GET_TABLE_REGIONS_FAIL errors. Hence, a metric for GET_TABLE_REGIONS_FAIL 
failures would be really helpful, as the underlying cause could be anything.
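
Illustrative only: a stand-in for the kind of global counter this proposes; the real change would register a new counter in GlobalClientMetrics rather than use this hypothetical enum:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical client-side counter, incremented wherever getTableRegions()
// fails, regardless of the underlying cause.
enum ClientMetric {
    GET_TABLE_REGIONS_FAIL;

    private final AtomicLong count = new AtomicLong();

    void increment() { count.incrementAndGet(); }
    long value() { return count.get(); }
}

// Usage at the failure site:
// ClientMetric.GET_TABLE_REGIONS_FAIL.increment();
{code}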



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release of Apache Phoenix 5.2.0 RC6

2024-03-27 Thread Viraj Jasani
Once Omid is upgraded to 1.1.2 on the Phoenix master and 5.2 branches, I would
be happy to start a new RC for the Phoenix 5.2.0 release sometime next week.


On Thu, Feb 29, 2024 at 10:45 AM Viraj Jasani  wrote:

> Thanks Istvan, Rajeshbabu and Lars for identifying, fixing and now
> starting with new Omid release.
> I will re-create RCs once upgraded Omid version is checked-in.
>
> In the meantime, if there is any important fix anyone thinks is worth
> including in 5.2.0, please let me know.
>
> This vote is now closed and the RC is rejected for release.
>
>
>
> On Thu, Feb 29, 2024 at 8:13 AM rajeshb...@apache.org <
> chrajeshbab...@gmail.com> wrote:
>
>> Istvan,
>>
>> Sure, yes will do 1.1.2 release tomorrow.
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Thu, Feb 29, 2024, 9:40 PM Istvan Toth  wrote:
>>
>> > The fix for OMID-277 has landed (among with two other fixes)
>> >
>> > Is there a chance you could manage the Omid 1.1.2 release, Rajeshbabu?
>> >
>> > On Thu, Feb 29, 2024 at 7:10 AM Istvan Toth  wrote:
>> >
>> > > -1 (binding). We need a new Omid release that fixes OMID-277 first.
>> > >
>> > > On Thu, Feb 29, 2024 at 6:57 AM Istvan Toth 
>> wrote:
>> > >
>> > >> I have done no testing yet, but
>> > >> https://issues.apache.org/jira/browse/OMID-277 looks like a blocker.
>> > >>
>> > >> On Wed, Feb 28, 2024 at 10:55 PM Viraj Jasani 
>> > wrote:
>> > >>
>> > >>> Please vote on this Apache Phoenix release candidate,
>> Phoenix-5.2.0RC6
>> > >>>
>> > >>> The VOTE will remain open for at least 72 hours.
>> > >>>
>> > >>> [ ] +1 Release this package as Apache Phoenix 5.2.0
>> > >>> [ ] -1 Do not release this package because ...
>> > >>>
>> > >>> The tag to be voted on is 5.2.0RC6:
>> > >>>
>> > >>>   https://github.com/apache/phoenix/tree/5.2.0RC6
>> > >>>
>> > >>> The release files, including signatures, digests, as well as
>> CHANGES.md
>> > >>> and RELEASENOTES.md included in this RC can be found at:
>> > >>>
>> > >>> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC6/
>> > >>>
>> > >>> Maven artifacts are available in a staging repository at:
>> > >>>
>> > >>>
>> >
>> https://repository.apache.org/content/repositories/orgapachephoenix-1254/
>> > >>>
>> > >>> Artifacts were signed with the 1012D134 key which can be found in:
>> > >>>
>> > >>>   https://dist.apache.org/repos/dist/release/phoenix/KEYS
>> > >>>
>> > >>> To learn more about Apache Phoenix, please see
>> > >>>
>> > >>>   https://phoenix.apache.org/
>> > >>>
>> > >>> Thanks,
>> > >>> Your Phoenix Release Manager
>> > >>>
>> > >>
>> > >>
>> > >> --
>> > >> *István Tóth* | Sr. Staff Software Engineer
>> > >> *Email*: st...@cloudera.com
>> > >> cloudera.com <https://www.cloudera.com>
>> > >> [image: Cloudera] <https://www.cloudera.com/>
>> > >> [image: Cloudera on Twitter] <https://twitter.com/cloudera> [image:
>> > >> Cloudera on Facebook] <https://www.facebook.com/cloudera> [image:
>> > >> Cloudera on LinkedIn] <https://www.linkedin.com/company/cloudera>
>> > >> --
>> > >> --
>> > >>
>> > >
>> >
>>
>


[jira] [Updated] (PHOENIX-7285) Upgrade Zookeeper to 3.8.4

2024-03-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7285:
--
Fix Version/s: 5.2.0
   (was: 5.3.0)

> Upgrade Zookeeper to 3.8.4
> -
>
> Key: PHOENIX-7285
> URL: https://issues.apache.org/jira/browse/PHOENIX-7285
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] reducing Github noise in the JIRA comment section

2024-03-22 Thread Viraj Jasani
+1, we can adopt the same settings as HBase; specifically, the recent setup
notifies when a PR is opened, but PR discussions are not mirrored as Jira comments.


On Thu, Mar 21, 2024 at 10:49 PM Istvan Toth  wrote:

> A few years back I changed the GitHub integration settings to write
> all events as comments to the ticket.
> In hindsight, this was a bad idea, as the tickets are now so noisy that
> they are barely suitable for discussions.
> I propose copying the relevant project settings from HBase, which seems to
> be much more usable.
>
> WDYT ?
>
> Istvan
>


Re: [VOTE] Release of Apache Phoenix Omid 1.1.2 RC0

2024-03-21 Thread Viraj Jasani
+1,

Signature: ok
Checksum: ok
Build from source: ok
Rat check: ok


On Thu, Mar 14, 2024 at 8:15 PM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> Please vote on this Apache phoenix omid release candidate,
> phoenix-omid-1.1.2RC0
>
> The VOTE will remain open for at least 72 hours.
>
> [ ] +1 Release this package as Apache phoenix omid 1.1.2
> [ ] -1 Do not release this package because ...
>
> The tag to be voted on is 1.1.2RC0:
>
>   https://github.com/apache/phoenix-omid/tree/1.1.2RC0
> 
>
> The release files, including signatures, digests, as well as CHANGES.md
> and RELEASENOTES.md included in this RC can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-omid-1.1.2RC0/
>
> Maven artifacts are available in a staging repository at:
>
> https://repository.apache.org/content/repositories/orgapachephoenix-1255/
>
> Artifacts were signed with the 0x2CC0FD99 key which can be found in:
>
>   https://dist.apache.org/repos/dist/release/phoenix/KEYS
>
> To learn more about Apache phoenix omid, please see
>
>   https://phoenix.apache.org/
>
>
> Thanks
> Phoenix Release Manager
>


[jira] [Resolved] (PHOENIX-7253) Metadata lookup performance improvement for range scan queries

2024-03-15 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7253.
---
Resolution: Fixed

> Metadata lookup performance improvement for range scan queries
> --
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we access all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up retrieving 
> region locations of all regions of the base table. If an IOException is 
> thrown by the HBase client during any region location lookup in meta, we only 
> perform a single retry.
> Proposal:
>  # All non point lookup queries should only retrieve region locations that 
> cover the scan boundary. Avoid fetching all region locations of the base 
> table.
>  # Make retries configurable with higher default value.
>  
> The proposal should improve the performance of queries:
>  * Range Scan
>  * Range scan on Salted table
>  * Range scan on Salted table with Tenant id and/or View index id
>  * Range Scan on Tenant connection
>  * Full Scan on Tenant connection
> Here, full scan on tenant connection is always of type "range scan" for the 
> base table.
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.loc

[jira] [Updated] (PHOENIX-7253) Metadata lookup performance improvement for range scan queries

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Description: 
Any considerably large table with more than 100k regions can give problematic 
performance if we access all region locations from meta for the given table 
before generating parallel or sequential scans for the given query. The perf 
impact can really hurt range scan queries.

Consider a table with hundreds of thousands of tenant views. Unless the query 
is a strict point lookup, any query on any tenant view would end up retrieving 
region locations of all regions of the base table. If an IOException is 
thrown by the HBase client during any region location lookup in meta, we only 
perform a single retry.

Proposal:
 # All non point lookup queries should only retrieve region locations that 
cover the scan boundary (see the sketch below); avoid fetching all region 
locations of the base table.
 # Make retries configurable with a higher default value.

 

The proposal should improve the performance of queries:
 * Range Scan
 * Range scan on Salted table
 * Range scan on Salted table with Tenant id and/or View index id
 * Range Scan on Tenant connection
 * Full Scan on Tenant connection

Here, full scan on tenant connection is always of type "range scan" for the 
base table.
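
A minimal sketch of proposal #1, fetching only the region locations that cover the scan range via the HBase RegionLocator API (retries and error handling omitted):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: walk region boundaries from the scan start key and stop once the
// scan stop key is covered, instead of calling getAllRegionLocations().
static List<HRegionLocation> getRegionsCoveringRange(Connection conn,
        TableName table, byte[] startKey, byte[] stopKey) throws IOException {
    List<HRegionLocation> locations = new ArrayList<>();
    try (RegionLocator locator = conn.getRegionLocator(table)) {
        byte[] currentKey = startKey;
        do {
            HRegionLocation location = locator.getRegionLocation(currentKey, false);
            locations.add(location);
            currentKey = location.getRegion().getEndKey();
            // Continue while there are more regions and the scan stop key
            // has not been covered yet.
        } while (!Arrays.equals(currentKey, HConstants.EMPTY_END_ROW)
                && (stopKey.length == 0 || Bytes.compareTo(currentKey, stopKey) < 0));
    }
    return locations;
}
{code}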

Sample stacktrace from the multiple failures observed:
{code:java}
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
    at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
    at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
    at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
    at 
org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:555)
    at 
org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:69)
    at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
    at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
    ...
    ...
    Caused by: java.io.InterruptedIOException: Origin: InterruptedException
        at 
org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
        at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
        ... 254 more
Caused by: java.lang.InterruptedException
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAc

[jira] [Updated] (PHOENIX-7253) Metadata lookup performance improvement for range scan queries

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Summary: Metadata lookup performance improvement for range scan queries  
(was: Metadata lookup performance improvement for range scan and multi-tenant 
queries on large tables)

> Metadata lookup performance improvement for range scan queries
> --
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we access all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up retrieving 
> region locations of all regions of the base table. If an IOException is 
> thrown by the HBase client during any region location lookup in meta, we only 
> perform a single retry.
> Proposal:
>  # All non point lookup queries should only retrieve region locations that 
> cover the scan boundary. Avoid fetching all region locations of the base 
> table.
>  # Make retries configurable with higher default value.
>  
> The proposal should improve the performance of queries:
>  * Range Scan
>  * Range scan on Salted table
>  * Range scan on Salted table with Tenant id and/or View index id
>  * Range Scan on Tenant connection
>  * Full Scan on Tenant connection
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(Connection

[jira] [Updated] (PHOENIX-7275) Update HBase 2.5 default version to 2.5.8

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7275:
--
Fix Version/s: (was: 5.3.0)

> Update HBase 2.5 default version to 2.5.8
> -
>
> Key: PHOENIX-7275
> URL: https://issues.apache.org/jira/browse/PHOENIX-7275
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.2.0, 5.1.4
>
>
> HBase 2.5.8 has just been released.
> We should be able to comfortably fit this into 5.2.0 (unless the tests 
> uncover some problem)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7275) Update HBase 2.5 default version to 2.5.8

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7275:
--
Fix Version/s: 5.2.0

> Update HBase 2.5 default version to 2.5.8
> -
>
> Key: PHOENIX-7275
> URL: https://issues.apache.org/jira/browse/PHOENIX-7275
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>
> HBase 2.5.8 has just been released.
> We should be able to comfortably fit this into 5.2.0 (unless the tests 
> uncover some problem)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (PHOENIX-7275) Update HBase 2.5 default version to 2.5.8

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened PHOENIX-7275:
---

Reopening for 5.2 backport. Would be nice to have this with 5.2.0 release.

> Update HBase 2.5 default version to 2.5.8
> -
>
> Key: PHOENIX-7275
> URL: https://issues.apache.org/jira/browse/PHOENIX-7275
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.3.0, 5.1.4
>
>
> HBase 2.5.8 has just been released.
> We should be able to comfortably fit this into 5.2.0 (unless the tests 
> uncover some problem)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7275) Update HBase 2.5 default version to 2.5.8

2024-03-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7275.
---
Resolution: Fixed

> Update HBase 2.5 default version to 2.5.8
> -
>
> Key: PHOENIX-7275
> URL: https://issues.apache.org/jira/browse/PHOENIX-7275
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 5.2.0, 5.1.4
>
>
> HBase 2.5.8 has just been released.
> We should be able to comfortably fit this into 5.2.0 (unless the tests 
> uncover some problem)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7253) Metadata lookup performance improvement for range scan and multi-tenant queries on large tables

2024-03-13 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Summary: Metadata lookup performance improvement for range scan and 
multi-tenant queries on large tables  (was: Metadata lookup perf improvement 
for range scan and multi-tenant queries on large tables)

> Metadata lookup performance improvement for range scan and multi-tenant 
> queries on large tables
> ---
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we access all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up retrieving 
> region locations of all regions of the base table. If an IOException is 
> thrown by the HBase client during any region location lookup in meta, we only 
> perform a single retry.
> Proposal:
>  # All non point lookup queries should only retrieve region locations that 
> cover the scan boundary. Avoid fetching all region locations of the base 
> table.
>  # Make retries configurable with higher default value.
>  
> The proposal should improve the performance of queries:
>  * Range Scan
>  * Range scan on Salted table
>  * Range scan on Salted table with Tenant id and/or View index id
>  * Range Scan on Tenant connection
>  * Full Scan on Tenant connection
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(Excepti

[jira] [Updated] (PHOENIX-7253) Metadata lookup perf improvement for range scan and multi-tenant queries on large tables

2024-03-13 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Summary: Metadata lookup perf improvement for range scan and multi-tenant 
queries on large tables  (was: Perf improvement for non-full scan queries on 
large table)

> Metadata lookup perf improvement for range scan and multi-tenant queries on 
> large tables
> 
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Viraj Jasani
>    Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we access all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up retrieving 
> region locations of all regions of the base table. If an IOException is 
> thrown by the HBase client during any region location lookup in meta, we only 
> perform a single retry.
> Proposal:
>  # All non point lookup queries should only retrieve region locations that 
> cover the scan boundary. Avoid fetching all region locations of the base 
> table.
>  # Make retries configurable with higher default value.
>  
> The proposal should improve the performance of queries:
>  * Range Scan
>  * Range scan on Salted table
>  * Range scan on Salted table with Tenant id and/or View index id
>  * Range Scan on Tenant connection
>  * Full Scan on Tenant connection
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(Connect

Re: [DISCUSS] 5.2.0 RC blocking issues

2024-03-11 Thread Viraj Jasani
Are we good to start with Omid release?


On Wed, Feb 28, 2024 at 1:51 PM Viraj Jasani  wrote:

> After resolving a couple more issues, I finally have the RC ready for
> vote. I will start the thread soon.
>
>
> On Tue, Feb 27, 2024 at 8:26 AM Viraj Jasani  wrote:
>
>> Another release attempt failed during publish release step, pushed fix
>> and ported to 5.2 branch:
>>
>> https://github.com/apache/phoenix/commit/bc1e2e7bea40c7d03940748e8f1d9f6b23339867
>>
>>
>> On Mon, Feb 26, 2024 at 5:36 PM Viraj Jasani  wrote:
>>
>>> Thank you Istvan!
>>>
>>> Except for the arm64 vs amd64 issue, I was able to get past the others. For
>>> the arm64 JDK, I made a local change to unblock the RC, and I hope that
>>> should be fine.
>>>
>>> However, publish-release step is failing with gpg error:
>>>
>>> 01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign (sign-release-artifacts)
>>> @ phoenix ---
>>> 01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
>>> gpg: setting pinentry mode 'error' failed: Forbidden
>>> gpg: keydb_search failed: Forbidden
>>> gpg: skipped "0x1012D134": Forbidden
>>> gpg: signing failed: Forbidden
>>>
>>> I am not sure of the exact root cause here, but it is quite likely that
>>> this is related to MGPG-92
>>> <https://issues.apache.org/jira/browse/MGPG-92> that Nick created. I
>>> wonder if we can run the publish-release step directly for debugging
>>> purpose by any chance.
>>>
>>>
>>>
>>>
>>> On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth 
>>> wrote:
>>>
>>>> IIRC I copied the docker release originally from HBase, which took them
>>>> from Spark.
>>>> The M1 issues may have been already fixed in one of those projects.
>>>>
>>>> A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific
>>>> issues.
>>>> I can't help directly, as I do not have access to a Mac, but ping me on
>>>> Slack if you get stuck.
>>>>
>>>> As for the third issue, the scripts generate logs in the working
>>>> directory.
>>>> If they do not log the maven command line, you could easily add a line
>>>> to
>>>> log them.
>>>> The ERRORS logged are a known issue, as Maven does not like the tricks
>>>> used
>>>> for multi-profile building, but even 3.9.6 accepts them, and only logs
>>>> WARNINGs in my experience.
>>>>
>>>> I'm going to do a dry-run of the release scripts locally, and see if I
>>>> can
>>>> repro some problems on my Intel Linux machine.
>>>> If you have access to a secure Intel Linux host, you may also want to
>>>> try
>>>> to run the scripts there.
>>>> (though getting the ssh password entry working can be tricky)
>>>>
>>>> Istvan
>>>>
>>>> On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani 
>>>> wrote:
>>>>
>>>> > Hi,
>>>> >
>>>> > I have started with creating 5.2.0 RC, I am starting this thread to
>>>> discuss
>>>> > some of the issues I have come across so far.
>>>> >
>>>> > 1) do-release-docker.sh is not able to grep and identify snapshot and
>>>> > release versions in release-util.sh, even though the function
>>>> > parse_version works fine if run manually on the 5.2 pom contents. Hence,
>>>> > I manually updated the utility to hard-code the 5.2.0-SNAPSHOT version:
>>>> > version:
>>>> >
>>>> > --- a/dev/create-release/release-util.sh
>>>> > +++ b/dev/create-release/release-util.sh
>>>> > @@ -149,6 +149,7 @@ function get_release_info {
>>>> >local version
>>>> >version="$(curl -s
>>>> > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
>>>> >  parse_version)"
>>>> > +  version="5.2.0-SNAPSHOT"
>>>> >echo "Current branch VERSION is $version."
>>>> >
>>>> >RELEASE_VERSION=""
>>>> >
>>>> >
>>>> > This is done to unblock the release for now. We can investigate and
>>>> fix
>>>> > this later.
>>>> >
>>>> > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:

[jira] [Resolved] (PHOENIX-7006) Configure maxLookbackAge at table level

2024-03-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7006.
---
Fix Version/s: 5.3.0
   Resolution: Fixed

> Configure maxLookbackAge at table level
> ---
>
> Key: PHOENIX-7006
> URL: https://issues.apache.org/jira/browse/PHOENIX-7006
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.3.0
>
>
> The Phoenix max lookback age feature preserves live or deleted row versions that 
> are only visible through the max lookback window; it does not preserve any 
> unwanted row versions that should not be visible through the max lookback 
> window. More details on the max lookback redesign: PHOENIX-6888
> As of today, max lookback age is only configurable at the cluster level 
> (config key: {_}phoenix.max.lookback.age.seconds{_}), meaning the same value 
> is used by all tables. This does not allow the compaction scanner for an 
> individual table to retain data based on a table level max lookback age. 
> Setting max lookback age at the table level can serve multiple purposes; e.g., 
> change-data-capture (PHOENIX-7001) for an individual table should have its own 
> data retention period.
> The purpose of this Jira is to allow maxlookback age as a table level 
> property:
>  * New column in SYSTEM.CATALOG to preserve table level maxlookback age
>  * PTable object to read the value of maxlookback from SYSTEM.CATALOG
>  * Allow CREATE/ALTER TABLE DDLs to provide maxlookback attribute
>  * CompactionScanner should use the table level maxLookbackAge if available, 
> else fall back to the cluster level config (see the usage sketch below)
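
For illustration, a hedged sketch of what the DDL usage could look like once the table property exists; the MAX_LOOKBACK_AGE property name and values here are assumptions, not finalized syntax:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MaxLookbackDdlExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical DDL: set a table-level max lookback age at create time,
        // then change it later via ALTER TABLE. Values are in seconds.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE T (ID INTEGER PRIMARY KEY, V VARCHAR) "
                    + "MAX_LOOKBACK_AGE=86400"); // per-table retention window
            stmt.execute("ALTER TABLE T SET MAX_LOOKBACK_AGE=3600");
        }
    }
}
{code}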



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7258) Query Optimizer should pick Index hint even for point lookup queries

2024-03-08 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7258.
---
Resolution: Fixed

> Query Optimizer should pick Index hint even for point lookup queries
> 
>
> Key: PHOENIX-7258
> URL: https://issues.apache.org/jira/browse/PHOENIX-7258
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.2.0
>
>
> For better performance, a user can create covered indexes such that the indexed 
> columns are the same as the composite primary key of the data table, but in a 
> different order. For instance, create a data table with columns PK1, PK2, PK3, 
> C1, C2 and primary key (PK1, PK2, PK3). To benefit from HBase block caching by 
> keeping rows with the same PK3 value as close to each other as possible, we can 
> create an index on (PK3, PK2, PK1) that also includes columns C1 and C2.
> For point lookups on the data table, it might still be helpful to query the 
> index table depending on the use case. We should allow using an index hint to 
> query the index table for point lookups.
> When the query optimizer identifies that the query is a point lookup on the 
> data table and "stop at best plan" is true, it immediately returns without 
> checking the hint. We should check for the hint and, if an applicable 
> index-based hinted plan is identified, use it.
>  
> Assuming getHintedPlanIfApplicable() retrieves hinted plan, it can look 
> something like:
> {code:java}
> if (dataPlan.getContext().getScanRanges().isPointLookup() && stopAtBestPlan
> && dataPlan.isApplicable()) {
> if (indexes.isEmpty() || select.getHint().getHint(Hint.INDEX) == null) {
> return Collections.singletonList(dataPlan);
> }
> QueryPlan hintedPlan = getHintedPlanIfApplicable(dataPlan, statement, 
> targetColumns,
> parallelIteratorFactory, select, indexes);
> if (hintedPlan != null) {
> PTable index = hintedPlan.getTableRef().getTable();
> if (hintedPlan.isApplicable() && (index.getIndexWhere() == null
> || isPartialIndexUsable(select, dataPlan, index))) {
> return Collections.singletonList(hintedPlan);
> }
> }
> return Collections.singletonList(dataPlan);
> } {code}
> We still need to be optimal i.e. if the hinted index plan is not applicable 
> or useful, we still need to immediately return the data plan of point lookup.
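> For context, a hedged end-to-end example of the scenario above, using 
> Phoenix's documented /*+ INDEX(...) */ hint (table and index names are 
> illustrative only):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
> 
> public final class IndexHintPointLookupExample {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
>                 Statement stmt = conn.createStatement()) {
>             // Covered index whose key is the data table PK in reverse order.
>             stmt.execute("CREATE TABLE T (PK1 VARCHAR NOT NULL, PK2 VARCHAR NOT NULL, "
>                     + "PK3 VARCHAR NOT NULL, C1 VARCHAR, C2 VARCHAR "
>                     + "CONSTRAINT PK PRIMARY KEY (PK1, PK2, PK3))");
>             stmt.execute("CREATE INDEX IDX_T ON T (PK3, PK2, PK1) INCLUDE (C1, C2)");
>             // A fully qualified PK filter is a point lookup on T; with this
>             // change the optimizer honors the hint and reads IDX_T instead.
>             try (ResultSet rs = stmt.executeQuery(
>                     "SELECT /*+ INDEX(T IDX_T) */ C1 FROM T "
>                             + "WHERE PK1 = 'a' AND PK2 = 'b' AND PK3 = 'c'")) {
>                 while (rs.next()) {
>                     System.out.println(rs.getString(1));
>                 }
>             }
>         }
>     }
> }
> {code}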



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7258) Query Optimizer should pick Index hint even for point lookup queries

2024-03-08 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7258:
--
Fix Version/s: 5.2.0

> Query Optimizer should pick Index hint even for point lookup queries
> 
>
> Key: PHOENIX-7258
> URL: https://issues.apache.org/jira/browse/PHOENIX-7258
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.2.0
>
>
> For better performance, users can create covered indexes whose indexed 
> columns are the same as the composite primary key of the data table, but in 
> a different order. For instance, create a data table with columns PK1, PK2, 
> PK3, C1, C2 and primary key (PK1, PK2, PK3). To get better performance from 
> HBase block caching, where data with the same PK3 value should reside as 
> close together as possible, we can create an index on PK3, PK2, PK1 that 
> also includes columns C1 and C2.
> For point lookups on the data table, it might still be helpful to query the 
> index table, depending on the use case. We should allow an index hint to 
> direct such point lookups to the index table.
> When the query optimizer identifies that the query is a point lookup on the 
> data table and "stop at best plan" is true, it immediately returns without 
> checking the hint. We should check for the hint and, if an applicable 
> index-based hinted plan is identified, use it.
>  
> Assuming getHintedPlanIfApplicable() retrieves the hinted plan, it could 
> look something like:
> {code:java}
> if (dataPlan.getContext().getScanRanges().isPointLookup() && stopAtBestPlan
>         && dataPlan.isApplicable()) {
>     if (indexes.isEmpty() || select.getHint().getHint(Hint.INDEX) == null) {
>         return Collections.singletonList(dataPlan);
>     }
>     QueryPlan hintedPlan = getHintedPlanIfApplicable(dataPlan, statement,
>             targetColumns, parallelIteratorFactory, select, indexes);
>     if (hintedPlan != null) {
>         PTable index = hintedPlan.getTableRef().getTable();
>         if (hintedPlan.isApplicable() && (index.getIndexWhere() == null
>                 || isPartialIndexUsable(select, dataPlan, index))) {
>             return Collections.singletonList(hintedPlan);
>         }
>     }
>     return Collections.singletonList(dataPlan);
> } {code}
> We still need to be optimal, i.e. if the hinted index plan is not applicable 
> or useful, we immediately return the point-lookup data plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7253) Perf improvement for non-full scan queries on large table

2024-03-07 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Description: 
Any considerably large table with more than 100k regions can give problematic 
performance if we fetch all region locations from meta for the given table 
before generating parallel or sequential scans for the given query. The perf 
impact can really hurt range scan queries.

Consider a table with hundreds of thousands of tenant views. Unless the query 
is a strict point lookup, any query on any tenant view would end up retrieving 
the region locations of all regions of the base table. In case an IOException 
is thrown by the HBase client during any region location lookup in meta, we 
only perform a single retry.

Proposal:
 # All non-point-lookup queries should retrieve only the region locations that 
cover the scan boundary, avoiding a fetch of all region locations of the base 
table (see the sketch after the lists below).
 # Make retries configurable, with a higher default value.

 

The proposal should improve the performance of these queries:
 * Range scan
 * Range scan on a salted table
 * Range scan on a salted table with tenant id and/or view index id
 * Range scan on a tenant connection
 * Full scan on a tenant connection
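
A minimal sketch of proposal (1), assuming an HBase RegionLocator for the 
physical table is at hand (the method name and its integration point in 
BaseResultIterators are illustrative, not the final patch):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public final class ScanBoundaryRegionLookup {

    // Fetch only the region locations that overlap [startKey, stopKey),
    // walking region by region instead of calling getAllRegionLocations().
    static List<HRegionLocation> locationsForScanRange(RegionLocator locator,
            byte[] startKey, byte[] stopKey) throws IOException {
        List<HRegionLocation> locations = new ArrayList<>();
        byte[] currentKey = startKey;
        while (true) {
            HRegionLocation location = locator.getRegionLocation(currentKey, false);
            locations.add(location);
            currentKey = location.getRegion().getEndKey();
            // Stop at the table's last region (empty end key) or once the
            // collected regions cover the scan's stop key.
            if (Bytes.equals(currentKey, HConstants.EMPTY_END_ROW)
                    || (!Bytes.equals(stopKey, HConstants.EMPTY_END_ROW)
                            && Bytes.compareTo(currentKey, stopKey) >= 0)) {
                return locations;
            }
        }
    }
}
{code}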

 

Sample stacktrace from the multiple failures observed:
{code:java}
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
    at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
    at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
    at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
    at 
org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
    at 
org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
    at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
    at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
    ...
    ...
    Caused by: java.io.InterruptedIOException: Origin: InterruptedException
        at 
org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
        at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
        ... 254 more
Caused by: java.lang.InterruptedException
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1288)
        at 
java.base

[jira] [Updated] (PHOENIX-7258) Query Optimizer should pick Index hint even for point lookup queries

2024-03-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7258:
--
Affects Version/s: 5.1.3
   5.2.0

> Query Optimizer should pick Index hint even for point lookup queries
> 
>
> Key: PHOENIX-7258
> URL: https://issues.apache.org/jira/browse/PHOENIX-7258
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
>
> For better performance, users can create covered indexes whose indexed 
> columns are the same as the composite primary key of the data table, but in 
> a different order. For instance, create a data table with columns PK1, PK2, 
> PK3, C1, C2 and primary key (PK1, PK2, PK3). To get better performance from 
> HBase block caching, where data with the same PK3 value should reside as 
> close together as possible, we can create an index on PK3, PK2, PK1 that 
> also includes columns C1 and C2.
> For point lookups on the data table, it might still be helpful to query the 
> index table, depending on the use case. We should allow an index hint to 
> direct such point lookups to the index table.
> When the query optimizer identifies that the query is a point lookup on the 
> data table and "stop at best plan" is true, it immediately returns without 
> checking the hint. We should check for the hint and, if an applicable 
> index-based hinted plan is identified, use it.
>  
> Assuming getHintedPlanIfApplicable() retrieves the hinted plan, it could 
> look something like:
> {code:java}
> if (dataPlan.getContext().getScanRanges().isPointLookup() && stopAtBestPlan
>         && dataPlan.isApplicable()) {
>     if (indexes.isEmpty() || select.getHint().getHint(Hint.INDEX) == null) {
>         return Collections.singletonList(dataPlan);
>     }
>     QueryPlan hintedPlan = getHintedPlanIfApplicable(dataPlan, statement,
>             targetColumns, parallelIteratorFactory, select, indexes);
>     if (hintedPlan != null) {
>         PTable index = hintedPlan.getTableRef().getTable();
>         if (hintedPlan.isApplicable() && (index.getIndexWhere() == null
>                 || isPartialIndexUsable(select, dataPlan, index))) {
>             return Collections.singletonList(hintedPlan);
>         }
>     }
>     return Collections.singletonList(dataPlan);
> } {code}
> We still need to be optimal, i.e. if the hinted index plan is not applicable 
> or useful, we immediately return the point-lookup data plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7258) Query Optimizer should pick Index hint even for point lookup queries

2024-03-05 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7258:
-

 Summary: Query Optimizer should pick Index hint even for point 
lookup queries
 Key: PHOENIX-7258
 URL: https://issues.apache.org/jira/browse/PHOENIX-7258
 Project: Phoenix
  Issue Type: Improvement
Reporter: Viraj Jasani


For better performance, users can create covered indexes whose indexed columns 
are the same as the composite primary key of the data table, but in a 
different order. For instance, create a data table with columns PK1, PK2, PK3, 
C1, C2 and primary key (PK1, PK2, PK3). To get better performance from HBase 
block caching, where data with the same PK3 value should reside as close 
together as possible, we can create an index on PK3, PK2, PK1 that also 
includes columns C1 and C2.

For point lookups on the data table, it might still be helpful to query the 
index table, depending on the use case. We should allow an index hint to 
direct such point lookups to the index table.

When the query optimizer identifies that the query is a point lookup on the 
data table and "stop at best plan" is true, it immediately returns without 
checking the hint. We should check for the hint and, if an applicable 
index-based hinted plan is identified, use it.

 

Assuming getHintedPlanIfApplicable() retrieves the hinted plan, it could look 
something like:
{code:java}
if (dataPlan.getContext().getScanRanges().isPointLookup() && stopAtBestPlan
        && dataPlan.isApplicable()) {
    if (indexes.isEmpty() || select.getHint().getHint(Hint.INDEX) == null) {
        return Collections.singletonList(dataPlan);
    }
    QueryPlan hintedPlan = getHintedPlanIfApplicable(dataPlan, statement,
            targetColumns, parallelIteratorFactory, select, indexes);
    if (hintedPlan != null) {
        PTable index = hintedPlan.getTableRef().getTable();
        if (hintedPlan.isApplicable() && (index.getIndexWhere() == null
                || isPartialIndexUsable(select, dataPlan, index))) {
            return Collections.singletonList(hintedPlan);
        }
    }
    return Collections.singletonList(dataPlan);
} {code}
We still need to be optimal, i.e. if the hinted index plan is not applicable 
or useful, we immediately return the point-lookup data plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7258) Query Optimizer should pick Index hint even for point lookup queries

2024-03-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-7258:
-

Assignee: Sanjeet Malhotra

> Query Optimizer should pick Index hint even for point lookup queries
> 
>
> Key: PHOENIX-7258
> URL: https://issues.apache.org/jira/browse/PHOENIX-7258
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
>
> For better performance, users can create covered indexes whose indexed 
> columns are the same as the composite primary key of the data table, but in 
> a different order. For instance, create a data table with columns PK1, PK2, 
> PK3, C1, C2 and primary key (PK1, PK2, PK3). To get better performance from 
> HBase block caching, where data with the same PK3 value should reside as 
> close together as possible, we can create an index on PK3, PK2, PK1 that 
> also includes columns C1 and C2.
> For point lookups on the data table, it might still be helpful to query the 
> index table, depending on the use case. We should allow an index hint to 
> direct such point lookups to the index table.
> When the query optimizer identifies that the query is a point lookup on the 
> data table and "stop at best plan" is true, it immediately returns without 
> checking the hint. We should check for the hint and, if an applicable 
> index-based hinted plan is identified, use it.
>  
> Assuming getHintedPlanIfApplicable() retrieves the hinted plan, it could 
> look something like:
> {code:java}
> if (dataPlan.getContext().getScanRanges().isPointLookup() && stopAtBestPlan
>         && dataPlan.isApplicable()) {
>     if (indexes.isEmpty() || select.getHint().getHint(Hint.INDEX) == null) {
>         return Collections.singletonList(dataPlan);
>     }
>     QueryPlan hintedPlan = getHintedPlanIfApplicable(dataPlan, statement,
>             targetColumns, parallelIteratorFactory, select, indexes);
>     if (hintedPlan != null) {
>         PTable index = hintedPlan.getTableRef().getTable();
>         if (hintedPlan.isApplicable() && (index.getIndexWhere() == null
>                 || isPartialIndexUsable(select, dataPlan, index))) {
>             return Collections.singletonList(hintedPlan);
>         }
>     }
>     return Collections.singletonList(dataPlan);
> } {code}
> We still need to be optimal, i.e. if the hinted index plan is not applicable 
> or useful, we immediately return the point-lookup data plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7253) Perf improvement for non-full scan queries on large table

2024-03-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7253:
--
Fix Version/s: 5.2.0
   5.1.4

> Perf improvement for non-full scan queries on large table
> -
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we fetch all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up 
> retrieving the region locations of all regions of the base table. In case an 
> IOException is thrown by the HBase client during any region location lookup 
> in meta, we only perform a single retry.
> Proposal:
>  # All non-point-lookup queries should retrieve only the region locations 
> that cover the scan boundary, avoiding a fetch of all region locations of 
> the base table.
>  # Make retries configurable, with a higher default value.
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionI

[jira] [Created] (PHOENIX-7253) Perf improvement for non-full scan queries on large table

2024-03-04 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7253:
-

 Summary: Perf improvement for non-full scan queries on large table
 Key: PHOENIX-7253
 URL: https://issues.apache.org/jira/browse/PHOENIX-7253
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.1.3, 5.2.0
Reporter: Viraj Jasani


Any considerably large table with more than 100k regions can give problematic 
performance if we fetch all region locations from meta for the given table 
before generating parallel or sequential scans for the given query. The perf 
impact can really hurt range scan queries.

Consider a table with hundreds of thousands of tenant views. Unless the query 
is a strict point lookup, any query on any tenant view would end up retrieving 
the region locations of all regions of the base table. In case an IOException 
is thrown by the HBase client during any region location lookup in meta, we 
only perform a single retry.

Proposal:
 # All non-point-lookup queries should retrieve only the region locations that 
cover the scan boundary, avoiding a fetch of all region locations of the base 
table.
 # Make retries configurable, with a higher default value.

 

Sample stacktrace from the multiple failures observed:
{code:java}
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
    at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
    at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
    at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
    at 
org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
    at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
    at 
org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
    at 
org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
    at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
    at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
    at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
    at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
    at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
    ...
    ...
    Caused by: java.io.InterruptedIOException: Origin: InterruptedException
        at 
org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
        at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
        at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
        ... 254 more
Caused by: java.lang.InterruptedException
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
        at 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1288)
        at 
java.base/java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:424

[jira] [Assigned] (PHOENIX-7253) Perf improvement for non-full scan queries on large table

2024-03-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-7253:
-

Assignee: Viraj Jasani

> Perf improvement for non-full scan queries on large table
> -
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
>
> Any considerably large table with more than 100k regions can give problematic 
> performance if we fetch all region locations from meta for the given table 
> before generating parallel or sequential scans for the given query. The perf 
> impact can really hurt range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query 
> is a strict point lookup, any query on any tenant view would end up 
> retrieving the region locations of all regions of the base table. In case an 
> IOException is thrown by the HBase client during any region location lookup 
> in meta, we only perform a single retry.
> Proposal:
>  # All non-point-lookup queries should retrieve only the region locations 
> that cover the scan boundary, avoiding a fetch of all region locations of 
> the base table.
>  # Make retries configurable, with a higher default value.
>  
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack 
> trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table 
> regions.
>     at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at 
> org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at 
> org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
>     at 
> org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
>     Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>         at 
> org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
>         at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImpleme

[jira] [Updated] (PHOENIX-7236) Fix release scripts and Update version to 5.3.0

2024-03-02 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7236:
--
Summary: Fix release scripts and Update version to 5.3.0  (was: Fix release 
scripts for 5.2)

> Fix release scripts and Update version to 5.3.0
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0
>
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release of Apache Phoenix 5.2.0 RC6

2024-02-29 Thread Viraj Jasani
Thanks Istvan, Rajeshbabu and Lars for identifying and fixing the issue, and
for now starting the new Omid release.
I will re-create RCs once the upgraded Omid version is checked in.

In the meantime, if there is any important fix anyone thinks is worth
including in 5.2.0, please let me know.

This vote is now closed and the RC is rejected.



On Thu, Feb 29, 2024 at 8:13 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> Istvan,
>
> Sure, yes will do 1.1.2 release tomorrow.
>
> Thanks,
> Rajeshbabu.
>
> On Thu, Feb 29, 2024, 9:40 PM Istvan Toth  wrote:
>
> > > The fix for OMID-277 has landed (along with two other fixes)
> >
> > > Is there a chance you could manage the Omid 1.2.1 release, Rajeshbabu?
> >
> > On Thu, Feb 29, 2024 at 7:10 AM Istvan Toth  wrote:
> >
> > > -1 We need a new Omid release that fixes OMID-277 first
> (binding)
> > >
> > > On Thu, Feb 29, 2024 at 6:57 AM Istvan Toth 
> wrote:
> > >
> > >> I have done no testing yet, but
> > >> https://issues.apache.org/jira/browse/OMID-277 looks like a blocker.
> > >>
> > >> On Wed, Feb 28, 2024 at 10:55 PM Viraj Jasani 
> > wrote:
> > >>
> > >>> Please vote on this Apache Phoenix release candidate,
> Phoenix-5.2.0RC6
> > >>>
> > >>> The VOTE will remain open for at least 72 hours.
> > >>>
> > >>> [ ] +1 Release this package as Apache Phoenix 5.2.0
> > >>> [ ] -1 Do not release this package because ...
> > >>>
> > >>> The tag to be voted on is 5.2.0RC6:
> > >>>
> > >>>   https://github.com/apache/phoenix/tree/5.2.0RC6
> > >>>
> > >>> The release files, including signatures, digests, as well as
> CHANGES.md
> > >>> and RELEASENOTES.md included in this RC can be found at:
> > >>>
> > >>> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC6/
> > >>>
> > >>> Maven artifacts are available in a staging repository at:
> > >>>
> > >>>
> >
> https://repository.apache.org/content/repositories/orgapachephoenix-1254/
> > >>>
> > >>> Artifacts were signed with the 1012D134 key which can be found in:
> > >>>
> > >>>   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > >>>
> > >>> To learn more about Apache Phoenix, please see
> > >>>
> > >>>   https://phoenix.apache.org/
> > >>>
> > >>> Thanks,
> > >>> Your Phoenix Release Manager
> > >>>
> > >>
> > >>
> > >> --
> > >> *István Tóth* | Sr. Staff Software Engineer
> > >> *Email*: st...@cloudera.com
> > >> cloudera.com <https://www.cloudera.com>
> > >> --
> > >> --
> > >>
> > >
> >
>


[jira] [Created] (PHOENIX-7244) BloomFilter tests that can be run with HBase 2.4 and 2.5 profiles

2024-02-28 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7244:
-

 Summary: BloomFilter tests that can be run with HBase 2.4 and 2.5 
profiles
 Key: PHOENIX-7244
 URL: https://issues.apache.org/jira/browse/PHOENIX-7244
 Project: Phoenix
  Issue Type: Task
Reporter: Viraj Jasani


PHOENIX-7229 fixes how the start and stop rowkeys are generated for single 
point lookups so that we can leverage bloom filters. However, the BloomFilterIT 
written with the initial commit is only compatible with HBase 2.5, since 
HBASE-27241 is only present from 2.5 onwards. BloomFilterIT is removed for now; 
as part of this Jira we should re-write BloomFilterIT so that it can verify 
bloom filter metrics for any HBase profile, and skip the test if the 
mini-cluster is brought up with any version less than 2.5.0.

This means that we cannot directly access the BloomFilterMetrics class; we 
should instead check the regionserver JMX endpoint, verify whether the store 
metrics "bloomFilterRequestsCount", "bloomFilterNegativeResultsCount" and 
"bloomFilterEligibleRequestsCount" are present, and verify the values 
accordingly. A sketch of such a check follows below.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] Release of Apache Phoenix 5.2.0 RC6

2024-02-28 Thread Viraj Jasani
Please vote on this Apache Phoenix release candidate, Phoenix-5.2.0RC6

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache Phoenix 5.2.0
[ ] -1 Do not release this package because ...

The tag to be voted on is 5.2.0RC6:

  https://github.com/apache/phoenix/tree/5.2.0RC6

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.2.0RC6/

Maven artifacts are available in a staging repository at:

https://repository.apache.org/content/repositories/orgapachephoenix-1254/

Artifacts were signed with the 1012D134 key which can be found in:

  https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache Phoenix, please see

  https://phoenix.apache.org/

Thanks,
Your Phoenix Release Manager


Re: [DISCUSS] 5.2.0 RC blocking issues

2024-02-28 Thread Viraj Jasani
After resolving a couple more issues, I finally have the RC ready for vote.
I will start the thread soon.


On Tue, Feb 27, 2024 at 8:26 AM Viraj Jasani  wrote:

> Another release attempt failed during publish release step, pushed fix and
> ported to 5.2 branch:
>
> https://github.com/apache/phoenix/commit/bc1e2e7bea40c7d03940748e8f1d9f6b23339867
>
>
> On Mon, Feb 26, 2024 at 5:36 PM Viraj Jasani  wrote:
>
>> Thank you Istvan!
>>
>> Except for the arm64 vs amd64 issue, I was able to get past the other
>> issues. For the arm64 JDK, I have made a local change to unblock the RC,
>> and I hope that should be fine.
>>
>> However, publish-release step is failing with gpg error:
>>
>> 01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign (sign-release-artifacts)
>> @ phoenix ---
>> 01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
>> gpg: setting pinentry mode 'error' failed: Forbidden
>> gpg: keydb_search failed: Forbidden
>> gpg: skipped "0x1012D134": Forbidden
>> gpg: signing failed: Forbidden
>>
>> I am not sure of the exact root cause here, but it is quite likely that
>> this is related to MGPG-92
>> <https://issues.apache.org/jira/browse/MGPG-92> that Nick created. I
>> wonder if we can run the publish-release step directly for debugging
>> purposes by any chance.
>>
>>
>>
>>
>> On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth 
>> wrote:
>>
>>> IIRC I copied the docker release originally from HBase, which took them
>>> from Spark.
>>> The M1 issues may have been already fixed in one of those projects.
>>>
>>> A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific
>>> issues.
>>> I can't help directly, as I do not have access to a Mac, but ping me on
>>> Slack if you get stuck.
>>>
>>> As for the third issue, the scripts generate logs in the working
>>> directory.
>>> If they do not log the maven command line, you could easily add a line to
>>> log them.
>>> The ERRORS logged are a known issue, as Maven does not like the tricks
>>> used
>>> for multi-profile building, but even 3.9.6 accepts them, and only logs
>>> WARNINGs in my experience.
>>>
>>> I'm going to do a dry-run of the release scripts locally, and see if I
>>> can
>>> repro some problems on my Intel Linux machine.
>>> If you have access to a secure Intel Linux host, you may also want to try
>>> to run the scripts there.
>>> (though getting the ssh password entry working can be tricky)
>>>
>>> Istvan
>>>
>>> On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani  wrote:
>>>
>>> > Hi,
>>> >
> >>> > I have started creating the 5.2.0 RC, and I am starting this thread to
> >>> discuss
> >>> > some of the issues I have come across so far.
>>> >
>>> > 1) do-release-docker.sh is not able to grep and identify snapshot and
>>> > release versions in release-utils.
> >>> > The function parse_version works fine if run manually on the
> >>> 5.2 pom
> >>> > contents. Hence, I manually updated the utility to hard-code the
> >>> > 5.2.0-SNAPSHOT version:
>>> >
>>> > --- a/dev/create-release/release-util.sh
>>> > +++ b/dev/create-release/release-util.sh
>>> > @@ -149,6 +149,7 @@ function get_release_info {
>>> >local version
>>> >version="$(curl -s
>>> > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
>>> >  parse_version)"
>>> > +  version="5.2.0-SNAPSHOT"
>>> >echo "Current branch VERSION is $version."
>>> >
>>> >RELEASE_VERSION=""
>>> >
>>> >
>>> > This is done to unblock the release for now. We can investigate and fix
>>> > this later.
>>> >
>>> > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:
>>> >
>>> > Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
>>> > update-alternatives: using
>>> > /usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
>>> > /usr/bin/appletviewer (appletviewer) in auto mode
>>> > update-alternatives: using
>>> /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
>>> > to provide /usr/bin/jconsole (jconsole) in auto mode
>>> > Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
>>> > Processing triggers for l

[jira] [Updated] (PHOENIX-7227) Phoenix 5.2.0 release

2024-02-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7227:
--
Description: 
# Clean up fix versions
 # Spin RCs + Close the repository on 
https://repository.apache.org/#stagingRepositories
 # "Release" stages nexus repository
 # Promote RC artifacts in SVN
 # Update reporter tool with the released version
 # Push signed release tag
 # Add release version to the download page
 # Send announce email

  was:
# Clean up fix versions
 # Spin RCs
 # "Release" stages nexus repository
 # Promote RC artifacts in SVN
 # Update reporter tool with the released version
 # Push signed release tag
 # Add release version to the download page
 # Send announce email


> Phoenix 5.2.0 release
> -
>
> Key: PHOENIX-7227
> URL: https://issues.apache.org/jira/browse/PHOENIX-7227
> Project: Phoenix
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> # Clean up fix versions
>  # Spin RCs + Close the repository on 
> https://repository.apache.org/#stagingRepositories
>  # "Release" stages nexus repository
>  # Promote RC artifacts in SVN
>  # Update reporter tool with the released version
>  # Push signed release tag
>  # Add release version to the download page
>  # Send announce email



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] 5.2.0 RC blocking issues

2024-02-27 Thread Viraj Jasani
Another release attempt failed during publish release step, pushed fix and
ported to 5.2 branch:
https://github.com/apache/phoenix/commit/bc1e2e7bea40c7d03940748e8f1d9f6b23339867


On Mon, Feb 26, 2024 at 5:36 PM Viraj Jasani  wrote:

> Thank you Istvan!
>
> Except for the arm64 vs amd64 issue, I was able to get past the other
> issues. For the arm64 JDK, I have made a local change to unblock the RC,
> and I hope that should be fine.
>
> However, publish-release step is failing with gpg error:
>
> 01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign (sign-release-artifacts) @
> phoenix ---
> 01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
> gpg: setting pinentry mode 'error' failed: Forbidden
> gpg: keydb_search failed: Forbidden
> gpg: skipped "0x1012D134": Forbidden
> gpg: signing failed: Forbidden
>
> I am not sure of the exact root cause here, but it is quite likely that
> this is related to MGPG-92 <https://issues.apache.org/jira/browse/MGPG-92>
> that Nick created. I wonder if we can run the publish-release step directly
> for debugging purposes by any chance.
>
>
>
>
> On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth 
> wrote:
>
>> IIRC I copied the docker release originally from HBase, which took them
>> from Spark.
>> The M1 issues may have been already fixed in one of those projects.
>>
>> A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific
>> issues.
>> I can't help directly, as I do not have access to a Mac, but ping me on
>> Slack if you get stuck.
>>
>> As for the third issue, the scripts generate logs in the working
>> directory.
>> If they do not log the maven command line, you could easily add a line to
>> log them.
>> The ERRORS logged are a known issue, as Maven does not like the tricks
>> used
>> for multi-profile building, but even 3.9.6 accepts them, and only logs
>> WARNINGs in my experience.
>>
>> I'm going to do a dry-run of the release scripts locally, and see if I can
>> repro some problems on my Intel Linux machine.
>> If you have access to a secure Intel Linux host, you may also want to try
>> to run the scripts there.
>> (though getting the ssh password entry working can be tricky)
>>
>> Istvan
>>
>> On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani  wrote:
>>
>> > Hi,
>> >
>> > I have started creating the 5.2.0 RC, and I am starting this thread to
>> discuss
>> > some of the issues I have come across so far.
>> >
>> > 1) do-release-docker.sh is not able to grep and identify snapshot and
>> > release versions in release-utils.
>> > The function parse_version works fine if run manually on the 5.2
>> pom
>> > contents. Hence, I manually updated the utility to hard-code the
>> > 5.2.0-SNAPSHOT version:
>> >
>> > --- a/dev/create-release/release-util.sh
>> > +++ b/dev/create-release/release-util.sh
>> > @@ -149,6 +149,7 @@ function get_release_info {
>> >local version
>> >version="$(curl -s
>> > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
>> >  parse_version)"
>> > +  version="5.2.0-SNAPSHOT"
>> >echo "Current branch VERSION is $version."
>> >
>> >RELEASE_VERSION=""
>> >
>> >
>> > This is done to unblock the release for now. We can investigate and fix
>> > this later.
>> >
>> > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:
>> >
>> > Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
>> > update-alternatives: using
>> > /usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
>> > /usr/bin/appletviewer (appletviewer) in auto mode
>> > update-alternatives: using
>> /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
>> > to provide /usr/bin/jconsole (jconsole) in auto mode
>> > Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
>> > Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
>> > Processing triggers for ca-certificates (20230311ubuntu0.18.04.1) ...
>> > Updating certificates in /etc/ssl/certs...
>> > 0 added, 0 removed; done.
>> > Running hooks in /etc/ca-certificates/update.d...
>> > done.
>> > done.
>> > Processing triggers for libgdk-pixbuf2.0-0:arm64 (2.36.11-2) ...
>> > update-alternatives: error: alternative
>> > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java for java not registered;
>> not
>> > setting
>> >

Re: [DISCUSS] 5.2.0 RC blocking issues

2024-02-26 Thread Viraj Jasani
Thank you Istvan!

Except for the arm64 vs amd64 issue, I was able to get past the other issues.
For the arm64 JDK, I have made a local change to unblock the RC, and I hope
that should be fine.

However, publish-release step is failing with gpg error:

01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign (sign-release-artifacts) @
phoenix ---
01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
gpg: setting pinentry mode 'error' failed: Forbidden
gpg: keydb_search failed: Forbidden
gpg: skipped "0x1012D134": Forbidden
gpg: signing failed: Forbidden

I am not sure of the exact root cause here, but it is quite likely that
this is related to MGPG-92 <https://issues.apache.org/jira/browse/MGPG-92>
that Nick created. I wonder if we can run the publish-release step directly
for debugging purposes by any chance.




On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth 
wrote:

> IIRC I copied the docker release originally from HBase, which took them
> from Spark.
> The M1 issues may have been already fixed in one of those projects.
>
> A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific issues.
> I can't help directly, as I do not have access to a Mac, but ping me on
> Slack if you get stuck.
>
> As for the third issue, the scripts generate logs in the working directory.
> If they do not log the maven command line, you could easily add a line to
> log them.
> The ERRORS logged are a known issue, as Maven does not like the tricks used
> for multi-profile building, but even 3.9.6 accepts them, and only logs
> WARNINGs in my experience.
>
> I'm going to do a dry-run of the release scripts locally, and see if I can
> repro some problems on my Intel Linux machine.
> If you have access to a secure Intel Linux host, you may also want to try
> to run the scripts there.
> (though getting the ssh password entry working can be tricky)
>
> Istvan
>
> On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani  wrote:
>
> > Hi,
> >
> > I have started creating the 5.2.0 RC, and I am starting this thread to
> discuss
> > some of the issues I have come across so far.
> >
> > 1) do-release-docker.sh is not able to grep and identify snapshot and
> > release versions in release-utils.
> > The function parse_version works fine if run manually on the 5.2
> pom
> > contents. Hence, I manually updated the utility to hard-code the
> > 5.2.0-SNAPSHOT version:
> >
> > --- a/dev/create-release/release-util.sh
> > +++ b/dev/create-release/release-util.sh
> > @@ -149,6 +149,7 @@ function get_release_info {
> >local version
> >version="$(curl -s
> > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
> >  parse_version)"
> > +  version="5.2.0-SNAPSHOT"
> >echo "Current branch VERSION is $version."
> >
> >RELEASE_VERSION=""
> >
> >
> > This is done to unblock the release for now. We can investigate and fix
> > this later.
> >
> > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:
> >
> > Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
> > update-alternatives: using
> > /usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
> > /usr/bin/appletviewer (appletviewer) in auto mode
> > update-alternatives: using /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
> > to provide /usr/bin/jconsole (jconsole) in auto mode
> > Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
> > Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
> > Processing triggers for ca-certificates (20230311ubuntu0.18.04.1) ...
> > Updating certificates in /etc/ssl/certs...
> > 0 added, 0 removed; done.
> > Running hooks in /etc/ca-certificates/update.d...
> > done.
> > done.
> > Processing triggers for libgdk-pixbuf2.0-0:arm64 (2.36.11-2) ...
> > update-alternatives: error: alternative
> > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java for java not registered;
> not
> > setting
> >
> > In order to resolve this, I set java to use java-8-openjdk-arm64 instead.
> > e.g. update-alternatives --set java
> > /usr/lib/jvm/java-8-openjdk-arm64/jre/bin/java
> > (and all other places where we use amd64)
> >
> > This is done to make the release progress, we can fix this later.
> >
> > 3) make_binary_release fails as it is unable to resolve ${hbase.version}
> > and ${hbase.compat.version}
> >
> > Packaging release source tarballs
> > 2024-02-25T19:43:46Z make_src_release start
> > 2024-02-25T19:43:47Z make_src_release stop (1 seconds)
> > 2024-02-25T19:43:47Z make_binary_release start
> >

[DISCUSS] 5.2.0 RC blocking issues

2024-02-25 Thread Viraj Jasani
Hi,

I have started creating the 5.2.0 RC, and I am starting this thread to discuss
some of the issues I have come across so far.

1) do-release-docker.sh is not able to grep and identify snapshot and
release versions in release-utils.
The function parse_version works fine if run manually on the 5.2 pom
contents. Hence, I manually updated the utility to hard-code the
5.2.0-SNAPSHOT version:

--- a/dev/create-release/release-util.sh
+++ b/dev/create-release/release-util.sh
@@ -149,6 +149,7 @@ function get_release_info {
   local version
   version="$(curl -s
"$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
 parse_version)"
+  version="5.2.0-SNAPSHOT"
   echo "Current branch VERSION is $version."

   RELEASE_VERSION=""


This is done to unblock the release for now. We can investigate and fix
this later.

2) openjdk-8-amd64 installation fails because I am using M1 Mac:

Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
update-alternatives: using
/usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
/usr/bin/appletviewer (appletviewer) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
to provide /usr/bin/jconsole (jconsole) in auto mode
Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
Processing triggers for ca-certificates (20230311ubuntu0.18.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
done.
Processing triggers for libgdk-pixbuf2.0-0:arm64 (2.36.11-2) ...
update-alternatives: error: alternative
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java for java not registered; not
setting

In order to resolve this, I set java to use java-8-openjdk-arm64 instead.
e.g. update-alternatives --set java
/usr/lib/jvm/java-8-openjdk-arm64/jre/bin/java
(and all other places where we use amd64)

This is done to make the release progress, we can fix this later.

3) make_binary_release fails as it is unable to resolve ${hbase.version}
and ${hbase.compat.version}

Packaging release source tarballs
2024-02-25T19:43:46Z make_src_release start
2024-02-25T19:43:47Z make_src_release stop (1 seconds)
2024-02-25T19:43:47Z make_binary_release start
19:45:27 [INFO] Scanning for projects...
19:45:27 [ERROR] [ERROR] Some problems were encountered while processing
the POMs:
[ERROR] 'dependencies.dependency.artifactId' for
org.apache.phoenix:phoenix-hbase-compat-${hbase.compat.version}:jar with
value 'phoenix-hbase-compat-${hbase.compat.version}' does not match a valid
id pattern. @ org.apache.phoenix:phoenix-core-client:[unknown-version],
/home/vjasani/phoenix-rm/output/phoenix/phoenix-core-client/pom.xml, line
220, column 19
[ERROR] 'dependencies.dependency.version' for
org.apache.phoenix:phoenix-hbase-compat-${hbase.compat.version}:jar is
missing. @ org.apache.phoenix:phoenix-core-client:[unknown-version],
/home/vjasani/phoenix-rm/output/phoenix/phoenix-core-client/pom.xml, line
218, column 17
[ERROR] 'dependencies.dependency.version' for
org.apache.hbase:hbase-common:jar must be a valid version but is
'${hbase.version}'. @ org.apache.phoenix:phoenix:5.2.0,
/home/vjasani/phoenix-rm/output/phoenix/pom.xml, line 1128, column 18
[ERROR] 'dependencies.dependency.version' for
org.apache.hbase:hbase-metrics-api:jar must be a valid version but is
'${hbase.version}'. @ org.apache.phoenix:phoenix:5.2.0,
/home/vjasani/phoenix-rm/output/phoenix/pom.xml, line 1151, column 18
[ERROR] 'dependencies.dependency.version' for
org.apache.hbase:hbase-client:jar must be a valid version but is
'${hbase.version}'. @ org.apache.phoenix:phoenix:5.2.0,
/home/vjasani/phoenix-rm/output/phoenix/pom.xml, line 1161, column 18
[ERROR] 'dependencies.dependency.version' for
org.apache.hbase:hbase-hadoop-compat:jar must be a valid version but is
'${hbase.version}'. @ org.apache.phoenix:phoenix:5.2.0,
/home/vjasani/phoenix-rm/output/phoenix/pom.xml, line 1226, column 18
...
...


As I do not see "Hbase version is already compiled for Hadoop3. Skipping
rebuild", I assume this is the first profile from profile.list,
i.e. 2.4, and we are unable to build for that first profile.

While 1) and 2) have workarounds, 3) is currently blocking the release.


[jira] [Resolved] (PHOENIX-7229) Leverage bloom filters for single key point lookups

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7229.
---
Resolution: Fixed

> Leverage bloom filters for single key point lookups
> ---
>
> Key: PHOENIX-7229
> URL: https://issues.apache.org/jira/browse/PHOENIX-7229
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-6710 enabled bloom filters by default when Phoenix tables are 
> created. However, we were not making use of them, because Phoenix translates 
> point lookups to scans with the scan range [startkey, stopkey), where 
> startkey is inclusive and equal to the row key, and stopkey is exclusive and 
> is the next key after the row key. 
> This fails the check inside the HBase code in 
> [StoreFileReader#passesBloomFilter|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java#L245-L250]
>  because it applies the bloom filter only to scans that are gets, and a scan 
> is a GET only if startkey = stopkey and both are inclusive. This is defined 
> in 
> [Scan#isGetScan|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L253-L255]
> We recently have some customers whose use case involves point lookups where 
> the row key is not going to be present in the table. Bloom filters are ideal 
> for those use cases.
> We can change the scan range for point lookups to leverage bloom filters, as 
> sketched below.
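> A minimal sketch of the scan-range change being described, using only the 
> client-side HBase API (where this lands inside Phoenix's scan construction 
> is not shown):
> {code:java}
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public final class PointLookupScan {
> 
>     // Start row and stop row are both the row key and both inclusive, so
>     // Scan#isGetScan() is true and StoreFileReader#passesBloomFilter can
>     // consult the ROW bloom filter.
>     static Scan pointLookup(byte[] rowKey) {
>         return new Scan()
>                 .withStartRow(rowKey, true)
>                 .withStopRow(rowKey, true);
>     }
> 
>     public static void main(String[] args) {
>         Scan scan = pointLookup(Bytes.toBytes("some-row-key"));
>         System.out.println(scan.isGetScan()); // true: bloom filter applies
>     }
> }
> {code}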



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
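For reference, here is a minimal hbase-client sketch of the scan shape described
above. It is illustrative only, not the actual Phoenix patch: the class and
method names are made up, but the Scan API calls are real. A scan qualifies as a
Get scan, and therefore reaches the bloom filter check, when startkey = stopkey
and both bounds are inclusive.

{code:java}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PointLookupScanSketch {

    // Build a point-lookup scan that HBase treats as a Get, so that
    // StoreFileReader#passesBloomFilter can consult the ROW bloom filter.
    public static Scan pointLookup(byte[] rowKey) {
        return new Scan()
                .withStartRow(rowKey, true)   // startkey, inclusive
                .withStopRow(rowKey, true);   // stopkey == startkey, inclusive
    }

    public static void main(String[] args) {
        Scan scan = pointLookup(Bytes.toBytes("row-1"));
        // Scan#isGetScan() is the check referenced above; expected output: true
        System.out.println("isGetScan: " + scan.isGetScan());
    }
}
{code}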


[jira] [Reopened] (PHOENIX-7229) Leverage bloom filters for single key point lookups

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened PHOENIX-7229:
---

HBASE-27241 is only present in hbase 2.5+ versions, hence hbase 2.4 based 
builds fail if we try to access bloom filter metrics in the IT test. 

Reopening the Jira to remove the BloomFilterIT test; we can figure out how best to 
write the test later, after the 5.2.0 release.

> Leverage bloom filters for single key point lookups
> ---
>
> Key: PHOENIX-7229
> URL: https://issues.apache.org/jira/browse/PHOENIX-7229
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-6710 enabled bloom filters by default when Phoenix tables are 
> created. However, we were not making use of it because Phoenix translates 
> point lookups to scans with the scan range [startkey, stopkey) where startkey 
> is inclusive and is equal to the row key and stopkey is exclusive and is the 
> next key after the row key. 
> This fails the check inside the hbase code in 
> [StoreFileReader#passesBloomFilter|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java#L245-L250]
>  because it applies the bloom filter only to scans that are gets, and a scan is a 
> GET only if startkey = stopkey and both are inclusive. This is defined here 
> [Scan#isGetScan|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L253-L255]
> We recently have some customers whose use case involves doing point lookups 
> where the row key is not going to be present in the table. Bloom filters are 
> ideal for those use cases.
> We can change our scan range for point lookups to leverage Bloom filters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7229) Leverage bloom filters for single key point lookups

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7229.
---
Resolution: Fixed

> Leverage bloom filters for single key point lookups
> ---
>
> Key: PHOENIX-7229
> URL: https://issues.apache.org/jira/browse/PHOENIX-7229
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-6710 enabled bloom filters by default when Phoenix tables are 
> created. However, we were not making use of it because Phoenix translates 
> point lookups to scans with the scan range [startkey, stopkey) where startkey 
> is inclusive and is equal to the row key and stopkey is exclusive and is the 
> next key after the row key. 
> This fails the check inside the hbase code in 
> [StoreFileReader#passesBloomFilter|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java#L245-L250]
>  because it applies the bloom filter only to scans that are gets, and a scan is a 
> GET only if startkey = stopkey and both are inclusive. This is defined here 
> [Scan#isGetScan|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L253-L255]
> We recently have some customers whose use case involves doing point lookups 
> where the row key is not going to be present in the table. Bloom filters are 
> ideal for those use cases.
> We can change our scan range for point lookups to leverage Bloom filters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7229) Leverage bloom filters for single key point lookups

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7229:
--
Fix Version/s: 5.2.0

> Leverage bloom filters for single key point lookups
> ---
>
> Key: PHOENIX-7229
> URL: https://issues.apache.org/jira/browse/PHOENIX-7229
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-6710 enabled bloom filters by default when Phoenix tables are 
> created. However, we were not making use of it because Phoenix translates 
> point lookups to scans with the scan range [startkey, stopkey) where startkey 
> is inclusive and is equal to the row key and stopkey is exclusive and is the 
> next key after the row key. 
> This fails the check inside the hbase code in 
> [StoreFileReader#passesBloomFilter|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java#L245-L250]
>  because it applies the bloom filter only to scans that are gets, and a scan is a 
> GET only if startkey = stopkey and both are inclusive. This is defined here 
> [Scan#isGetScan|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L253-L255]
> We recently have some customers whose use case involves doing point lookups 
> where the row key is not going to be present in the table. Bloom filters are 
> ideal for those use cases.
> We can change our scan range for point lookups to leverage Bloom filters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7234) Bump org.apache.commons:commons-compress from 1.21 to 1.26.0

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7234.
---
Resolution: Fixed

> Bump org.apache.commons:commons-compress from 1.21 to 1.26.0
> 
>
> Key: PHOENIX-7234
> URL: https://issues.apache.org/jira/browse/PHOENIX-7234
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Reported by github dependabot.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7234) Bump org.apache.commons:commons-compress from 1.21 to 1.26.0

2024-02-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7234:
--
Fix Version/s: 5.2.0
   5.1.4

> Bump org.apache.commons:commons-compress from 1.21 to 1.26.0
> 
>
> Key: PHOENIX-7234
> URL: https://issues.apache.org/jira/browse/PHOENIX-7234
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Reported by github dependabot.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7230) Optimize rpc call to master if all indexes are migrated to new coprocs

2024-02-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7230:
--
Release Note: Set "phoenix.index.region.observer.enabled.all.tables" to 
"false" only if the new index coprocs (GlobalIndexChecker, IndexRegionObserver 
etc.) are not in use by all tables. The default value "true" indicates that we 
will not perform an extra RPC call to retrieve the TableDescriptor for every 
Mutation.

> Optimize rpc call to master if all indexes are migrated to new coprocs
> --
>
> Key: PHOENIX-7230
> URL: https://issues.apache.org/jira/browse/PHOENIX-7230
> Project: Phoenix
>  Issue Type: Improvement
>    Affects Versions: 5.1.3
>Reporter: Viraj Jasani
>Assignee: Palash Chauhan
>Priority: Major
> Fix For: 5.2.0
>
>
> If all the tables of the cluster have been migrated to the new index coprocs 
> (GlobalIndexChecker, IndexRegionObserver), then for every mutation we should 
> avoid making an additional rpc call to the master to retrieve the 
> TableDescriptor to determine if it has the GlobalIndexChecker coproc enabled:
> {code:java}
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:605)
> at 
> org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled(IndexUtil.java:311)
> at 
> org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations(MutationState.java:1680)
> at org.apache.phoenix.execute.MutationState.sendBatch(MutationState.java:1255)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1186)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:2028)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1840)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:841)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:836)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:836) 
> {code}
> We already use the config "phoenix.index.region.observer.enabled" to 
> disable/enable the new index coproc during upgrade. The primary purpose of 
> this config is to add/remove the new or old index coprocs during the upgrade 
> process.
> We can introduce a new config "phoenix.index.region.observer.enabled.alltables" 
> with default value true. Unless the config is disabled, we should avoid the 
> call to "IndexUtil#isGlobalIndexCheckerEnabled" within 
> filterIndexCheckerMutations().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
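As a hedged illustration of the guard described in the release note above, the
sketch below shows how such a flag is typically consulted. The config key is
taken from the release note; the class and method names are hypothetical, not
the actual MutationState code.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IndexCoprocGuardSketch {

    // Key quoted from the release note; default "true" means all tables
    // already run the new index coprocs.
    static final String ALL_TABLES_KEY =
        "phoenix.index.region.observer.enabled.all.tables";

    // Returns true only when the flag is explicitly disabled; only then do we
    // pay the extra getTableDescriptor RPC to the master to check for the
    // GlobalIndexChecker coproc on each mutation batch.
    static boolean needsTableDescriptorLookup(Configuration conf) {
        return !conf.getBoolean(ALL_TABLES_KEY, true);
    }
}
{code}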


[jira] [Resolved] (PHOENIX-7230) Optimize rpc call to master if all indexes are migrated to new coprocs

2024-02-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7230.
---
Resolution: Fixed

> Optimize rpc call to master if all indexes are migrated to new coprocs
> --
>
> Key: PHOENIX-7230
> URL: https://issues.apache.org/jira/browse/PHOENIX-7230
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Palash Chauhan
>Priority: Major
> Fix For: 5.2.0
>
>
> If all the tables of the cluster have been migrated to the new index coprocs 
> (GlobalIndexChecker, IndexRegionObserver), then for every mutation we should 
> avoid making an additional rpc call to the master to retrieve the 
> TableDescriptor to determine if it has the GlobalIndexChecker coproc enabled:
> {code:java}
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:605)
> at 
> org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled(IndexUtil.java:311)
> at 
> org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations(MutationState.java:1680)
> at org.apache.phoenix.execute.MutationState.sendBatch(MutationState.java:1255)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1186)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:2028)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1840)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:841)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:836)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:836) 
> {code}
> We already use the config "phoenix.index.region.observer.enabled" to 
> disable/enable the new index coproc during upgrade. The primary purpose of 
> this config is to add/remove the new or old index coprocs during the upgrade 
> process.
> We can introduce a new config "phoenix.index.region.observer.enabled.alltables" 
> with default value true. Unless the config is disabled, we should avoid the 
> call to "IndexUtil#isGlobalIndexCheckerEnabled" within 
> filterIndexCheckerMutations().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7230) Optimize rpc call to master if all indexes are migrated to new coprocs

2024-02-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7230:
--
Fix Version/s: 5.2.0

> Optimize rpc call to master if all indexes are migrated to new coprocs
> --
>
> Key: PHOENIX-7230
> URL: https://issues.apache.org/jira/browse/PHOENIX-7230
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Palash Chauhan
>Priority: Major
> Fix For: 5.2.0
>
>
> If all the tables of the cluster have been migrated to the new index coprocs 
> (GlobalIndexChecker, IndexRegionObserver), then for every mutation we should 
> avoid making an additional rpc call to the master to retrieve the 
> TableDescriptor to determine if it has the GlobalIndexChecker coproc enabled:
> {code:java}
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:605)
> at 
> org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled(IndexUtil.java:311)
> at 
> org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations(MutationState.java:1680)
> at org.apache.phoenix.execute.MutationState.sendBatch(MutationState.java:1255)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1186)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:2028)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1840)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:841)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:836)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:836) 
> {code}
> We already use the config "phoenix.index.region.observer.enabled" to 
> disable/enable the new index coproc during upgrade. The primary purpose of 
> this config is to add/remove the new or old index coprocs during the upgrade 
> process.
> We can introduce a new config "phoenix.index.region.observer.enabled.alltables" 
> with default value true. Unless the config is disabled, we should avoid the 
> call to "IndexUtil#isGlobalIndexCheckerEnabled" within 
> filterIndexCheckerMutations().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7233) CQSI openConnection should timeout to unblock other connection threads

2024-02-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-7233:
-

Assignee: Divneet Kaur

> CQSI openConnection should timeout to unblock other connection threads
> --
>
> Key: PHOENIX-7233
> URL: https://issues.apache.org/jira/browse/PHOENIX-7233
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Divneet Kaur
>Priority: Major
>
> PhoenixDriver initializes and caches ConnectionQueryServices objects with 
> connectionQueryServicesCache. As part of the CQSI initialization, a connection 
> is opened to the HBase server using the HBase client's ConnectionFactory, 
> which provides a Connection object to the client. The Connection object 
> provided by HBase allows clients to share the Zookeeper connection and meta 
> cache, as well as remote connections to regionservers and master daemons. The 
> Connection object is used to perform Table CRUD operations as well as 
> Administrative actions on the cluster.
> HBase Connection object initialization requires the ClusterId, which is 
> maintained either in Zookeeper or the Master daemons (or both) and retrieved 
> by the client depending on whether the client is configured to use 
> ZKConnectionRegistry or MasterRegistry/RpcConnectionRegistry.
> For ZKConnectionRegistry, we have run into an edge case wherein the 
> connection to the Zookeeper server got stuck for more than 12 hours. When the 
> client tried to create a connection to the Zookeeper quorum to retrieve the 
> ClusterId, the Zookeeper leader was switched from one server to another. While 
> the leader switch event resulting in a stuck connection requires RCA, it is 
> not appropriate for the Phoenix/HBase client to wait indefinitely for a 
> response from Zookeeper without any connection timeout.
> For the Phoenix client, if one thread is stuck opening a connection during 
> CQSI#init, all other threads trying to create connections get stuck 
> because we take a class-level lock before opening the connection, leading to 
> all threads getting stuck and potential termination or degradation of the 
> client JVM.
> While the HBase client should also use a timeout, not having a timeout on the 
> Phoenix client side has far worse complications. As part of this Jira, we 
> should introduce a way for CQSI#openConnection to time out, either by using 
> the CompletableFuture API or our preconfigured thread-pool.
>  
> Stacktrace for reference:
>  
> {code:java}
> jdk.internal.misc.Unsafe.park
> java.util.concurrent.locks.LockSupport.park
> java.util.concurrent.CompletableFuture$Signaller.block
> java.util.concurrent.ForkJoinPool.managedBlock
> java.util.concurrent.CompletableFuture.waitingGet
> java.util.concurrent.CompletableFuture.get
> org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId
> org.apache.hadoop.hbase.client.ConnectionImplementation.
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance?
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance
> jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance
> java.lang.reflect.Constructor.newInstance
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$?
> org.apache.hadoop.hbase.client.ConnectionFactory$$Lambda$?.run
> java.security.AccessController.doPrivileged
> javax.security.auth.Subject.doAs
> org.apache.hadoop.security.UserGroupInformation.doAs
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$?
> org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call
> org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call
> org.apache.phoenix.util.PhoenixContextExecutor.call
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices
> org.apache.phoenix.jdbc.HighAvailabilityGroup.connectToOneCluster
> org.apache.phoenix.jdbc.ParallelPhoenixConnection.getConnection
> org.apache.phoenix.jdbc.ParallelPhoenixConnection.lambda$new$?
> org.apache.phoenix.jdbc.ParallelPhoenixConnection$$Lambda$?.get
> org.apache.phoenix.jdbc.ParallelPhoenixContext.lambda$chainOnConnClusterContext$?
> org.apache.phoenix.jdbc.ParallelPhoenixContext$$Lambda$?.apply {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
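One hedged sketch of the timeout idea described above, using the
CompletableFuture API. The timeout constant and class name are illustrative
assumptions; a real patch would read the timeout from Phoenix configuration and
plug into CQSI#openConnection.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class OpenConnectionTimeoutSketch {

    // Illustrative bound; not an actual Phoenix config value.
    static final long OPEN_CONN_TIMEOUT_MS = 30_000L;

    static Connection openConnectionWithTimeout(Configuration conf) throws Exception {
        CompletableFuture<Connection> future = CompletableFuture.supplyAsync(() -> {
            try {
                return ConnectionFactory.createConnection(conf);
            } catch (java.io.IOException e) {
                throw new CompletionException(e);
            }
        });
        try {
            // Bound the wait so one stuck ZK/registry lookup cannot block
            // every other thread queued behind the CQSI class-level lock.
            return future.get(OPEN_CONN_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Note: cancel() does not interrupt the underlying connection
            // attempt; it only stops callers from waiting on it further.
            future.cancel(true);
            throw e;
        }
    }
}
{code}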


[jira] [Created] (PHOENIX-7233) CQSI openConnection should timeout to unblock other connection threads

2024-02-23 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7233:
-

 Summary: CQSI openConnection should timeout to unblock other 
connection threads
 Key: PHOENIX-7233
 URL: https://issues.apache.org/jira/browse/PHOENIX-7233
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.1.3
Reporter: Viraj Jasani


PhoenixDriver initializes and caches ConnectionQueryServices objects with 
connectionQueryServicesCache. As part of the CQSI initialization, a connection is 
opened to the HBase server using the HBase client's ConnectionFactory, which 
provides a Connection object to the client. The Connection object provided 
by HBase allows clients to share the Zookeeper connection and meta cache, as 
well as remote connections to regionservers and master daemons. The Connection 
object is used to perform Table CRUD operations as well as Administrative 
actions on the cluster.

HBase Connection object initialization requires the ClusterId, which is 
maintained either in Zookeeper or the Master daemons (or both) and retrieved by 
the client depending on whether the client is configured to use 
ZKConnectionRegistry or MasterRegistry/RpcConnectionRegistry.

For ZKConnectionRegistry, we have run into an edge case wherein the connection 
to the Zookeeper server got stuck for more than 12 hours. When the client tried 
to create a connection to the Zookeeper quorum to retrieve the ClusterId, the 
Zookeeper leader was switched from one server to another. While the leader 
switch event resulting in a stuck connection requires RCA, it is not appropriate 
for the Phoenix/HBase client to wait indefinitely for a response from Zookeeper 
without any connection timeout.

For the Phoenix client, if one thread is stuck opening a connection during 
CQSI#init, all other threads trying to create connections get stuck because we 
take a class-level lock before opening the connection, leading to all threads 
getting stuck and potential termination or degradation of the client JVM.

While the HBase client should also use a timeout, not having a timeout on the 
Phoenix client side has far worse complications. As part of this Jira, we should 
introduce a way for CQSI#openConnection to time out, either by using the 
CompletableFuture API or our preconfigured thread-pool.

 

Stacktrace for reference:

 
{code:java}
jdk.internal.misc.Unsafe.park
java.util.concurrent.locks.LockSupport.park
java.util.concurrent.CompletableFuture$Signaller.block
java.util.concurrent.ForkJoinPool.managedBlock
java.util.concurrent.CompletableFuture.waitingGet
java.util.concurrent.CompletableFuture.get
org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId
org.apache.hadoop.hbase.client.ConnectionImplementation.
jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance?
jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance
jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance
java.lang.reflect.Constructor.newInstance
org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$?
org.apache.hadoop.hbase.client.ConnectionFactory$$Lambda$?.run
java.security.AccessController.doPrivileged
javax.security.auth.Subject.doAs
org.apache.hadoop.security.UserGroupInformation.doAs
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$?
org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call
org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call
org.apache.phoenix.util.PhoenixContextExecutor.call
org.apache.phoenix.query.ConnectionQueryServicesImpl.init
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices
org.apache.phoenix.jdbc.HighAvailabilityGroup.connectToOneCluster
org.apache.phoenix.jdbc.ParallelPhoenixConnection.getConnection
org.apache.phoenix.jdbc.ParallelPhoenixConnection.lambda$new$?
org.apache.phoenix.jdbc.ParallelPhoenixConnection$$Lambda$?.get
org.apache.phoenix.jdbc.ParallelPhoenixContext.lambda$chainOnConnClusterContext$?
org.apache.phoenix.jdbc.ParallelPhoenixContext$$Lambda$?.apply {code}
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7230) Optimize rpc call to master if all indexes are migrated to new coprocs

2024-02-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-7230:
-

Assignee: Palash Chauhan

> Optimize rpc call to master if all indexes are migrated to new coprocs
> --
>
> Key: PHOENIX-7230
> URL: https://issues.apache.org/jira/browse/PHOENIX-7230
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>    Reporter: Viraj Jasani
>Assignee: Palash Chauhan
>Priority: Major
>
> If all the tables of the cluster have been migrated to the new index coprocs 
> (GlobalIndexChecker, IndexRegionObserver), then for every mutation we should 
> avoid making an additional rpc call to the master to retrieve the 
> TableDescriptor to determine if it has the GlobalIndexChecker coproc enabled:
> {code:java}
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:605)
> at 
> org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled(IndexUtil.java:311)
> at 
> org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations(MutationState.java:1680)
> at org.apache.phoenix.execute.MutationState.sendBatch(MutationState.java:1255)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1186)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:2028)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1840)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:841)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:836)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:836) 
> {code}
> We already use the config "phoenix.index.region.observer.enabled" to 
> disable/enable the new index coproc during upgrade. The primary purpose of 
> this config is to add/remove the new or old index coprocs during the upgrade 
> process.
> We can introduce a new config "phoenix.index.region.observer.enabled.alltables" 
> with default value true. Unless the config is disabled, we should avoid the 
> call to "IndexUtil#isGlobalIndexCheckerEnabled" within 
> filterIndexCheckerMutations().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7230) Optimize rpc call to master if all indexes are migrated to new coprocs

2024-02-22 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7230:
-

 Summary: Optimize rpc call to master if all indexes are migrated 
to new coprocs
 Key: PHOENIX-7230
 URL: https://issues.apache.org/jira/browse/PHOENIX-7230
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.1.3
Reporter: Viraj Jasani


If all the tables of the cluster have been migrated to the new index coprocs 
(GlobalIndexChecker, IndexRegionObserver), then for every mutation we should 
avoid making an additional rpc call to the master to retrieve the TableDescriptor 
to determine if it has the GlobalIndexChecker coproc enabled:
{code:java}
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:605)
at 
org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled(IndexUtil.java:311)
at 
org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations(MutationState.java:1680)
at org.apache.phoenix.execute.MutationState.sendBatch(MutationState.java:1255)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1186)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:2028)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1840)
at org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:841)
at org.apache.phoenix.jdbc.PhoenixConnection$2.call(PhoenixConnection.java:836)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:836) 
{code}
We already use the config "phoenix.index.region.observer.enabled" to 
disable/enable the new index coproc during upgrade. The primary purpose of this 
config is to add/remove the new or old index coprocs during the upgrade process.

We can introduce a new config "phoenix.index.region.observer.enabled.alltables" 
with default value true. Unless the config is disabled, we should avoid the call 
to "IndexUtil#isGlobalIndexCheckerEnabled" within filterIndexCheckerMutations().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] 5.2.0 priority : PHOENIX-7106 Data Integrity Issues

2024-02-22 Thread Viraj Jasani
For 5.2.0 release, hadoop is bumped to 3.2.4.
I have merged the change to master and 5.2 branches. Thank you Istvan!

Awaiting pre-commit build results before creating 5.2.0 RC.


On Thu, Feb 22, 2024 at 9:32 AM Viraj Jasani  wrote:

> Istvan, is it possible to get the hadoop version bumped with 5.2.1? That would
> provide sufficient time to focus on resolving any issues that arise. Or have
> you already run tests with the new hadoop version on the hbase 2.5 profile?
>
>
> On Thu, Feb 22, 2024 at 12:38 AM Istvan Toth  wrote:
>
>> The 2.4.0 drop is committed.
>> Since there was no consensus on the 5.2.0 removal, I've kept that.
>>
>> Regarding the Hadoop version update:
>> I have not made as much progress with testing as I hoped.
>> I have reduced the scope of PHOENIX-7216 to just the 2.5 profile, as that
>> does not need more testing, and I want to get at least the latest Hadoop
>> patch releases into 5.2.0/5.1.4.
>>
>> I also see a new commons-compress version update by dependabot.
>>
>>
>> Istvan
>>
>>
>> On Fri, Feb 16, 2024 at 6:24 PM Viraj Jasani  wrote:
>>
>>> +1 for dropping support for 2.4.0.
>>> For 2.5.0-2.5.3, I think we might need more opinion?
>>>
>>>
>>> On Fri, Feb 16, 2024 at 12:12 AM Istvan Toth  wrote:
>>>
>>>> Nothing stimulates the mind like an upcoming release:
>>>> Since we have not yet released a 5.2 version which supports HBase 2.4.0
>>>> or pre 2.5.4 HBase versions, we could drop support for those.
>>>> I have opened separate tickets for both:
>>>>
>>>> https://issues.apache.org/jira/browse/PHOENIX-7218 for 2.4.0
>>>> https://issues.apache.org/jira/browse/PHOENIX-7219 for 2.5.3
>>>>
>>>> I don't think anyone will miss 2.4.0 support, but we may want to keep
>>>> HBase 2.5.0-2.5.3 as 2.5.3 is only a year old.
>>>>
>>>> Please share your opinion here or on the tickets.
>>>>
>>>> Istvan
>>>>
>>>>
>>>> On Fri, Feb 16, 2024 at 7:50 AM Istvan Toth  wrote:
>>>>
>>>>> I agree, this is a few lines (if it works) which takes no time to
>>>>> backport, so we need not hold up cutting the release branch for this.
>>>>>
>>>>> The HBase 2.5 and 2.5.0 profiles work fine with Hadoop 3.2.4, as
>>>>> expected, so updating those is kind of a non-brainer.
>>>>>
>>>>> I see many errors on the 2.4.0 and 2.4 profile, but I'm not yet sure
>>>>> if those are simply flakey, or if they are caused by the newer Hadoop.
>>>>>
>>>>> I haven't run the tests with Hadoop 3.3 yet. My HBase 3 WIP branch
>>>>> seems to work fine with it, but HBase 3 itself is built with Hadoop 3.3, 
>>>>> so
>>>>> that's a different situation.
>>>>>
>>>>> I will report back when I have more results.
>>>>>
>>>>> Istvan
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Feb 16, 2024 at 6:23 AM Viraj Jasani 
>>>>> wrote:
>>>>>
>>>> Sure let's go for it. I understand downstreamers are not happy with
>>>>>> CVEs coming from our artifacts that were released in 2022.
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 15, 2024 at 6:05 PM rajeshb...@apache.org <
>>>>>> chrajeshbab...@gmail.com> wrote:
>>>>>>
>>>>> Would be better to bump up Hadoop to 3.3.x I feel which has minimal
>>>>>>> vulnerabilities compared to Hadoop 3.2.4.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Rajeshbabu.
>>>>>>>
>>>>>>> On Fri, Feb 16, 2024, 7:25 AM Viraj Jasani 
>>>>>>> wrote:
>>>>>>>
>>>>>>> > Sure it sounds good to create PR for version upgrades while we are
>>>>>>> getting
>>>>>>> > close to releasing 5.2.0 and 5.1.4.
>>>>>>> > However, if the build has unexpected test failures, we can cut 5.2
>>>>>>> first,
>>>>>>> > and focus on stabilizing the upgrade changes on master branch PR
>>>>>>> rather
>>>>>>> > than 5.2 branch, allowing faster release.
>>>>>>> >
>>>>>>> > Some new features like CDC and JSON support will anyway need 5.3
>>>>>>> release
>>>>>>> > soon.

[jira] [Updated] (PHOENIX-7216) Bump Hadoop version to 3.2.4 for 2.5.x profile

2024-02-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7216:
--
Fix Version/s: 5.2.0

> Bump Hadoop version to 3.2.4 for 2.5.x profile
> --
>
> Key: PHOENIX-7216
> URL: https://issues.apache.org/jira/browse/PHOENIX-7216
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> This was discussed on the mailing list.
> We may want to bump other Hadoop versions as well, and we may want to bump 
> the 2.5 profile to Hadoop 3.3, but I want to make sure that at least this one 
> makes it into 5.2.0, while we test other updates.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] 5.2.0 priority : PHOENIX-7106 Data Integrity Issues

2024-02-22 Thread Viraj Jasani
Istvan, is it possible to get the hadoop version bumped with 5.2.1? That would
provide sufficient time to focus on resolving any issues that arise. Or have
you already run tests with the new hadoop version on the hbase 2.5 profile?


On Thu, Feb 22, 2024 at 12:38 AM Istvan Toth  wrote:

> The 2.4.0 drop is committed.
> Since there was no consensus on the 5.2.0 removal, I've kept that.
>
> Regarding the Hadoop version update:
> I have not made as much progress with testing as I hoped.
> I have reduced the scope of PHOENIX-7216 to just the 2.5 profile, as that
> does not need more testing, and I want to get at least the latest Hadoop
> patch releases into 5.2.0/5.1.4.
>
> I also see a new commons-compress version update by dependabot.
>
>
> Istvan
>
>
> On Fri, Feb 16, 2024 at 6:24 PM Viraj Jasani  wrote:
>
>> +1 for dropping support for 2.4.0.
>> For 2.5.0-2.5.3, I think we might need more opinion?
>>
>>
>> On Fri, Feb 16, 2024 at 12:12 AM Istvan Toth  wrote:
>>
>>> Nothing stimulates the mind like an upcoming release:
>>> Since we have not yet released a 5.2 version which supports HBase 2.4.0
>>> or pre 2.5.4 HBase versions, we could drop support for those.
>>> I have opened separate tickets for both:
>>>
>>> https://issues.apache.org/jira/browse/PHOENIX-7218 for 2.4.0
>>> https://issues.apache.org/jira/browse/PHOENIX-7219 for 2.5.3
>>>
>>> I don't think anyone will miss 2.4.0 support, but we may want to keep
>>> HBase 2.5.0-2.5.3 as 2.5.3 is only a year old.
>>>
>>> Please share your opinion here or on the tickets.
>>>
>>> Istvan
>>>
>>>
>>> On Fri, Feb 16, 2024 at 7:50 AM Istvan Toth  wrote:
>>>
>>>> I agree, this is a few lines (if it works) which takes no time to
>>>> backport, so we need not hold up cutting the release branch for this.
>>>>
>>>> The HBase 2.5 and 2.5.0 profiles work fine with Hadoop 3.2.4, as
>>>> expected, so updating those is kind of a non-brainer.
>>>>
>>>> I see many errors on the 2.4.0 and 2.4 profile, but I'm not yet sure if
>>>> those are simply flakey, or if they are caused by the newer Hadoop.
>>>>
>>>> I haven't run the tests with Hadoop 3.3 yet. My HBase 3 WIP branch
>>>> seems to work fine with it, but HBase 3 itself is built with Hadoop 3.3, so
>>>> that's a different situation.
>>>>
>>>> I will report back when I have more results.
>>>>
>>>> Istvan
>>>>
>>>>
>>>>
>>>> On Fri, Feb 16, 2024 at 6:23 AM Viraj Jasani 
>>>> wrote:
>>>>
>>> Sure let's go for it. I understand downstreamers are not happy with CVEs
>>>>> coming from our artifacts that were released in 2022.
>>>>>
>>>>>
>>>>> On Thu, Feb 15, 2024 at 6:05 PM rajeshb...@apache.org <
>>>>> chrajeshbab...@gmail.com> wrote:
>>>>>
>>>> Would be better to bump up Hadoop to 3.3.x I feel which has minimal
>>>>>> vulnerabilities compared to Hadoop 3.2.4.
>>>>>>
>>>>>> Thanks,
>>>>>> Rajeshbabu.
>>>>>>
>>>>>> On Fri, Feb 16, 2024, 7:25 AM Viraj Jasani 
>>>>>> wrote:
>>>>>>
>>>>>> > Sure it sounds good to create PR for version upgrades while we are
>>>>>> getting
>>>>>> > close to releasing 5.2.0 and 5.1.4.
>>>>>> > However, if the build has unexpected test failures, we can cut 5.2
>>>>>> first,
>>>>>> > and focus on stabilizing the upgrade changes on master branch PR
>>>>>> rather
>>>>>> > than 5.2 branch, allowing faster release.
>>>>>> >
>>>>>> > Some new features like CDC and JSON support will anyway need 5.3
>>>>>> release
>>>>>> > soon.
>>>>>> >
>>>>>> >
>>>>>> > On Thu, Feb 15, 2024 at 7:54 AM Istvan Toth 
>>>>>> wrote:
>>>>>> >
>>>>>> > > This comment
>>>>>> > > <
>>>>>> https://github.com/apache/phoenix/pull/1810#issuecomment-1945998086>
>>>>>> > got
>>>>>> > > me thinking.
>>>>>> > >
>>>>>> > > Most of the community (i.e. SFDC and CLDR) does not particularly

[jira] [Updated] (PHOENIX-6829) Add code coverage report aggregation with Jacoco

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6829:
--
Fix Version/s: 5.2.0

> Add code coverage report aggregation with Jacoco
> 
>
> Key: PHOENIX-6829
> URL: https://issues.apache.org/jira/browse/PHOENIX-6829
> Project: Phoenix
>  Issue Type: Task
>Reporter: Dóra Horváth
>Assignee: Dóra Horváth
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6849) git_jira_fix_version_check.py does not retrieve all JIRAs for a fixed version

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6849:
--
Fix Version/s: 5.2.0

> git_jira_fix_version_check.py does not retrieve all JIRAs for a fixed version
> -
>
> Key: PHOENIX-6849
> URL: https://issues.apache.org/jira/browse/PHOENIX-6849
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> The below API only returns 50 results by default as per the documentation 
> [jira.search_issues|https://jira.readthedocs.io/examples.html#searching]
> {code:python}
> all_issues_with_fix_version = jira.search_issues('project=' + 
> jira_project_name + ' and status in (Resolved,Closed) and fixVersion='+ 
> fix_version) {code}
> We need to fetch all the matching issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7063) Track and account garbage collected phoenix connections

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7063:
--
Fix Version/s: 5.2.0

> Track and account garbage collected phoenix connections
> ---
>
> Key: PHOENIX-7063
> URL: https://issues.apache.org/jira/browse/PHOENIX-7063
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> In production env, misbehaving clients can forget to close Phoenix 
> connections. This can result in Phoenix connections leaking. 
> Moreover, when Phoenix connections are tracked and limited by the 
> GLOBAL_OPEN_PHOENIX_CONNECTIONS metrics counter per jvm, it can lead to 
> client requests for Phoenix connections being rejected.
> Tracking and keeping count of garbage collected Phoenix connections can 
> alleviate the above issues.
> Additional logging during such reclaims will provide more insight into a 
> production env.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
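The Jira does not prescribe a mechanism, but one common way to count connections
reclaimed by GC without close() is a PhantomReference plus ReferenceQueue. The
sketch below is a hypothetical illustration, not Phoenix code; a real close()
path would deregister the reference so that only leaked connections are counted.

{code:java}
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class LeakedConnectionTrackerSketch {

    private final ReferenceQueue<Object> queue = new ReferenceQueue<>();
    // Keep the references strongly reachable until they are enqueued by GC.
    private final Set<PhantomReference<Object>> pending = ConcurrentHashMap.newKeySet();
    private final AtomicLong reclaimedCount = new AtomicLong();

    // Called when a connection is handed out. A real implementation would
    // also remove the reference in close() so clean closes are not counted.
    public void track(Object connection) {
        pending.add(new PhantomReference<>(connection, queue));
    }

    // Poll periodically (e.g. from a metrics hook) to count reclaimed objects.
    public long drainAndCount() {
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            pending.remove(ref);
            reclaimedCount.incrementAndGet();
        }
        return reclaimedCount.get();
    }
}
{code}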


[jira] [Updated] (PHOENIX-6141) Ensure consistency between SYSTEM.CATALOG and SYSTEM.CHILD_LINK

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6141:
--
Fix Version/s: (was: 5.2.0)
   (was: 5.1.4)

> Ensure consistency between SYSTEM.CATALOG and SYSTEM.CHILD_LINK
> ---
>
> Key: PHOENIX-6141
> URL: https://issues.apache.org/jira/browse/PHOENIX-6141
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Palash Chauhan
>Priority: Blocker
>
> Before 4.15, "CREATE/DROP VIEW" was an atomic operation since we were issuing 
> batch mutations on just the 1 SYSTEM.CATALOG region. In 4.15 we introduced 
> SYSTEM.CHILD_LINK to store the parent->child links and so a CREATE VIEW is no 
> longer atomic since it consists of 2 separate RPCs  (1 to SYSTEM.CHILD_LINK 
> to add the linking row and another to SYSTEM.CATALOG to write metadata for 
> the new view). 
> If the second RPC i.e. the RPC to write metadata to SYSTEM.CATALOG fails 
> after the 1st RPC has already gone through, there will be an inconsistency 
> between both metadata tables. We will see orphan parent->child linking rows 
> in SYSTEM.CHILD_LINK in this case. This can cause the following issues:
> # ALTER TABLE calls on the base table will fail
> # DROP TABLE without CASCADE will fail
> # The upgrade path has calls like UpgradeUtil.upgradeTable() which will fail
> # Any metadata consistency checks can be thrown off
> # Unnecessary extra storage of orphan links
> The first 3 issues happen because we wrongly deduce that a base table has 
> child views due to the orphan linking rows.
> This Jira aims to come up with a way to make mutations across SYSTEM.CATALOG 
> and SYSTEM.CHILD_LINK an atomic transaction. We can use a 2-phase commit 
> approach like in global indexing, or potentially explore using a transaction 
> manager. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7144) TableLevel Phoenix Metrics returns base tableName when queried for Index Table.

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7144.
---
Resolution: Fixed

> TableLevel Phoenix Metrics returns base tableName when queried for Index 
> Table.
> ---
>
> Key: PHOENIX-7144
> URL: https://issues.apache.org/jira/browse/PHOENIX-7144
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: vikas meka
>Assignee: vikas meka
>Priority: Major
> Fix For: 5.2.0
>
>
> Table-level client metrics return the base table name when indexes are used: 
> the Phoenix result set uses the index table name when storing metrics, whereas 
> DDL queries use the base table name when storing metrics in the HashMap.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7160) Change the TSO default port to be compatible with Omid 1.1.1

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7160.
---
Resolution: Fixed

> Change the TSO default port to be compatible with Omid 1.1.1
> 
>
> Key: PHOENIX-7160
> URL: https://issues.apache.org/jira/browse/PHOENIX-7160
> Project: Phoenix
>  Issue Type: Bug
>  Components: omid
>Reporter: Cong Luo
>Assignee: Cong Luo
>Priority: Major
> Fix For: 5.2.0
>
>
> Since 
> [Omid-247|https://github.com/apache/phoenix-omid/commit/7d3cf3e83586bc523e20277113ecb844172cefc0]
>  has been merged, the default port has changed from 54758 to 24758. The TSO 
> configuration in the Phoenix component also needs to be updated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7163) Update commons-configuration2 to 2.8.0

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7163:
--
Fix Version/s: (was: 5.2.0)
   (was: 5.1.4)

> Update commons-configuration2 to 2.8.0
> --
>
> Key: PHOENIX-7163
> URL: https://issues.apache.org/jira/browse/PHOENIX-7163
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> We are using commons-configuration2 for the Hadoop metrics code, because 
> that Hadoop API is badly broken.
> Because of this, I have added dependency management for that dependency.
> We are setting an old version, which is known to have CVEs.
> -Remove the dependency management so that we can pick up any possible future 
> fixes from Hadoop instead.-
> Hadoop has updated to 2.8.0 without any code changes.
> Since we only add this for the Hadoop API leak, we may as well update to 
> 2.8.0.
> It is also not needed in hbase-server and hbase-mapreduce, as it is provided 
> by the expected Hadoop on the classpath.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7175) Set java.io.tmpdir to the maven build directory for tests

2024-02-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7175.
---
Resolution: Fixed

> Set java.io.tmpdir to the maven build directory for tests
> -
>
> Key: PHOENIX-7175
> URL: https://issues.apache.org/jira/browse/PHOENIX-7175
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Divneet Kaur
>Priority: Minor
>  Labels: test
> Fix For: 5.2.0, 5.1.4
>
>
> Our tests are currently using a global tmpdir.
> This causes conflicts when running multiple test runs on the same machine.
> Set java.io.tmpdir to the build directory.
> We can copy this from HBase:
> https://github.com/apache/hbase/blob/a09305d5854fc98300426271fad3b53a69d2ae71/pom.xml#L1879



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

