[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824269#comment-17824269 ]

ASF GitHub Bot commented on PHOENIX-7130:
-

stoty commented on PR #1773:
URL: https://github.com/apache/phoenix/pull/1773#issuecomment-1982506030

   Merged manually.
   Note that 5.2 still does not have this.




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824268#comment-17824268 ]

ASF GitHub Bot commented on PHOENIX-7130:
-

stoty closed pull request #1773: PHOENIX-7130 Support skipping of shade sources 
jar creation
URL: https://github.com/apache/phoenix/pull/1773




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.





Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-03-06 Thread via GitHub


stoty commented on PR #1773:
URL: https://github.com/apache/phoenix/pull/1773#issuecomment-1982506030

   Merged manually.
   Note that 5.2 still does not have this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-03-06 Thread via GitHub


stoty closed pull request #1773: PHOENIX-7130 Support skipping of shade sources 
jar creation
URL: https://github.com/apache/phoenix/pull/1773





[jira] [Commented] (PHOENIX-7236) Fix release scripts and Update version to 5.3.0

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824265#comment-17824265 ]

ASF GitHub Bot commented on PHOENIX-7236:
-

stoty closed pull request #1836: PHOENIX-7236 Fix release scripts for 5.2
URL: https://github.com/apache/phoenix/pull/1836




> Fix release scripts and Update version to 5.3.0
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0
>
>
> We see problems with the release scripts when trying to release 5.2.





Re: [PR] PHOENIX-7236 Fix release scripts for 5.2 [phoenix]

2024-03-06 Thread via GitHub


stoty closed pull request #1836: PHOENIX-7236 Fix release scripts for 5.2
URL: https://github.com/apache/phoenix/pull/1836





[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824255#comment-17824255 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

shahrs87 commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982362971

   There are 3 checkstyle warnings, 1 ASF license warning, and 1 SpotBugs warning. 
Can you please fix them? @palashc 




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 
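
A minimal, hypothetical Java sketch of point 2 above (the class name comes from the 
issue; the field, constructor, and getInstance signature are assumptions, not the 
actual Phoenix code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch only: one cache instance per cluster, keyed on that
// cluster's Configuration, instead of a single JVM-wide singleton.
public class ServerMetadataCacheHAImpl {
    private static final Map<Configuration, ServerMetadataCacheHAImpl> INSTANCES =
            new ConcurrentHashMap<>();

    private final Configuration conf;

    private ServerMetadataCacheHAImpl(Configuration conf) {
        this.conf = conf;
    }

    public static ServerMetadataCacheHAImpl getInstance(Configuration conf) {
        // Configuration is compared by identity here, which is what we want:
        // each mini cluster in an HA test has its own Configuration object.
        return INSTANCES.computeIfAbsent(conf, ServerMetadataCacheHAImpl::new);
    }
}
{code}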





Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


shahrs87 commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982362971

   There are 3 checkstyle warnings, 1 ASF license warning, and 1 SpotBugs warning. 
Can you please fix them? @palashc 





[jira] [Commented] (PHOENIX-7261) Align mockito version with Hadoop and HBase in QueryServer

2024-03-06 Thread Istvan Toth (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824253#comment-17824253 ]

Istvan Toth commented on PHOENIX-7261:
--

Nit: The quoted text in the description is from Andrew, not me.

> Align mockito version with Hadoop and HBase in QueryServer
> --
>
> Key: PHOENIX-7261
> URL: https://issues.apache.org/jira/browse/PHOENIX-7261
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>
> As mentioned in PHOENIX-6769 by [~stoty] 
> {quote}There is a well known incompatibility between old versions of 
> mockito-all and mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster.
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid.
> {quote}
>  
> The goal is to update mockito to 4.11.0, the same as HBase branch-3. The same 
> was done in PHOENIX-6769 for Phoenix.
> Also, context on why I want this:
>  # Currently we are working on building Phoenix, PQS and Omid with Hadoop 
> 3.3.6, and it seems we fail to even start a minicluster with the mockito that is 
> bundled in the code, with the following error:
> {code:java}
> [ERROR] 
> org.apache.phoenix.tool.ParameterizedPhoenixCanaryToolIT.phoenixCanaryToolTest[ParameterizedPhoenixCanaryToolIT_isPositiveTestType=false,isNamespaceEnabled=false,resultSinkOption=org.apache.phoenix.tool.PhoenixCanaryTool$StdOutSink]
>  -- Time elapsed: 4.234 s <<< ERROR!
> java.lang.RuntimeException: java.lang.IncompatibleClassChangeError: class 
> org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter$2 can not implement 
> org.mockito.ArgumentMatcher, because it is not an interface 
> (org.mockito.ArgumentMatcher is in unnamed module of loader 'app')
> at 
> org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:551)
> at 
> org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:450)
> at 
> org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:436)
> at 
> org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:518)
> at 
> org.apache.phoenix.tool.ParameterizedPhoenixCanaryToolIT.setup(ParameterizedPhoenixCanaryToolIT.java:115)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:568)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at 

[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824245#comment-17824245 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

palashc commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982304969

   @shahrs87 Looks like there are no test failures apart from the one flapper: 
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1845/6/




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


palashc commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982304969

   @shahrs87 Looks like there are no test failures apart from the one flapper: 
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1845/6/





[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824220#comment-17824220 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

palashc commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515307599


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {

Review Comment:
   done.





> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


palashc commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515307599


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {

Review Comment:
   done.






[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824217#comment-17824217 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

shahrs87 commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515288037


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {

Review Comment:
   Can you add some more context in the comment, something like:
   
   PhoenixRegionServerEndpoint is a region server coproc. There is a 1-1 
correspondence between PhoenixRegionServerEndpoint and ServerMetadataCache.
   In ITs we can have multiple regionservers per cluster, so we need multiple 
instances of ServerMetadataCache.
   And HighAvailabilityTestingUtility creates 2 clusters, so we need one 
instance of ServerMetadataCache for each regionserver in each cluster.
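
For illustration only, the expanded class-level javadoc could read something like 
this (wording taken from the review comment above; just a sketch, not the final text):

{code:java}
/**
 * Implementation of {@link ServerMetadataCache} for Integration Tests.
 * PhoenixRegionServerEndpoint is a region server coproc, and there is a 1-1
 * correspondence between PhoenixRegionServerEndpoint and ServerMetadataCache.
 * In ITs we can have multiple regionservers per cluster, so we need multiple
 * instances of ServerMetadataCache, and HighAvailabilityTestingUtility creates
 * two clusters, so we keep one instance of ServerMetadataCache per regionserver
 * in each cluster, keyed on the regionserver ServerName.
 */
{code}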





> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


shahrs87 commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515288037


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {

Review Comment:
   Can you add some more context in the comment, something like:
   
   PhoenixRegionServerEndpoint is a region server coproc. There is a 1-1 
correspondence between PhoenixRegionServerEndpoint and ServerMetadataCache.
   In ITs we can have multiple regionservers per cluster, so we need multiple 
instances of ServerMetadataCache.
   And HighAvailabilityTestingUtility creates 2 clusters, so we need one 
instance of ServerMetadataCache for each regionserver in each cluster.






Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


palashc commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982017344

   > we need to call ServerMetadataCacheTestImpl#resetCache in all the tests with 
category: NeedsOwnMiniClusterTest
   @shahrs87 Does it make sense to have that change as part of 
[PHOENIX-7166](https://github.com/apache/phoenix/pull/1778)?





[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824211#comment-17824211 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

palashc commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1982017344

   > we need to call ServerMetadataCacheTestImpl#resetCache in all the tests with 
category: NeedsOwnMiniClusterTest
   @shahrs87 Does it make sense to have that change as part of 
[PHOENIX-7166](https://github.com/apache/phoenix/pull/1778)?




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824201#comment-17824201 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

shahrs87 commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1981942724

   Also, we need to call ServerMetadataCacheTestImpl#resetCache in all the tests 
with category: NeedsOwnMiniClusterTest
   I have all of them in [this 
PR](https://github.com/apache/phoenix/pull/1849/files).
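
As a rough illustration of what that call looks like in such a test (the test class 
name and the @AfterClass placement are assumptions; only 
ServerMetadataCacheTestImpl#resetCache itself comes from this thread):

{code:java}
import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
import org.apache.phoenix.end2end.ServerMetadataCacheTestImpl;
import org.junit.AfterClass;
import org.junit.experimental.categories.Category;

// Hypothetical test class; shows only where the cache reset would go.
@Category(NeedsOwnMiniClusterTest.class)
public class SomeOwnClusterIT {

    @AfterClass
    public static synchronized void freeResources() {
        // Clear the ServerName-keyed cache instances so the next mini cluster starts clean.
        ServerMetadataCacheTestImpl.resetCache();
    }
}
{code}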




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


shahrs87 commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1981942724

   Also, we need to call ServerMetadataCacheTestImpl#resetCache in all the tests 
with category: NeedsOwnMiniClusterTest
   I have all of them in [this 
PR](https://github.com/apache/phoenix/pull/1849/files).





[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824191#comment-17824191 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

shahrs87 commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515171337


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {
+    private static volatile Map<ServerName, ServerMetadataCacheTestImpl> INSTANCES = new HashMap<>();
+    private Connection connectionForTesting;
+    ServerMetadataCacheTestImpl(Configuration conf) {

Review Comment:
   new line before constructor.



##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {
+    private static volatile Map<ServerName, ServerMetadataCacheTestImpl> INSTANCES = new HashMap<>();
+    private Connection connectionForTesting;
+    ServerMetadataCacheTestImpl(Configuration conf) {
+        super(conf);
+    }
+
+    public static ServerMetadataCacheTestImpl getInstance(Configuration conf, ServerName serverName) {
+        ServerMetadataCacheTestImpl result = INSTANCES.get(serverName);
+        if (result == null) {
+            synchronized (ServerMetadataCacheTestImpl.class) {
+                result = INSTANCES.get(serverName);
+                if (result == null) {
+                    result = new ServerMetadataCacheTestImpl(conf);
+                    INSTANCES.put(serverName, result);
+                }
+            }
+        }
+        return result;
+    }
+
+    public static void setInstance(ServerName serverName, ServerMetadataCacheTestImpl cache) {
+        INSTANCES.put(serverName, cache);
+    }
+
+    public Long getLastDDLTimestampForTableFromCacheOnly(byte[] tenantID, byte[] schemaName,
+                                                         byte[] tableName) {
+        byte[] tableKey = SchemaUtil.getTableKey(tenantID, schemaName, tableName);
+        ImmutableBytesPtr tableKeyPtr = new ImmutableBytesPtr(tableKey);
+        return lastDDLTimestampMap.getIfPresent(tableKeyPtr);
+    }
+
+    public void setConnectionForTesting(Connection connection) {
+        this.connectionForTesting = connection;
+    }
+
+    public static void resetCache() {
+        INSTANCES.clear();
+    }
+
+    @Override
+    protected Connection getConnection(Properties properties) throws SQLException {
+        System.out.println("USED");
+        return connectionForTesting != null ? connectionForTesting
+                : QueryUtil.getConnectionOnServer(properties, this.conf);

Review Comment:
   ```suggestion
   return connectionForTesting != null ? connectionForTesting : super.getConnection(properties);
   ```



##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */

Re: [PR] PHOENIX-7251 : Refactor server-side code to support multiple ServerMetadataCache for ITs which create multiple RSs or mini clusters [phoenix]

2024-03-06 Thread via GitHub


shahrs87 commented on code in PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#discussion_r1515171337


##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {
+    private static volatile Map<ServerName, ServerMetadataCacheTestImpl> INSTANCES = new HashMap<>();
+    private Connection connectionForTesting;
+    ServerMetadataCacheTestImpl(Configuration conf) {

Review Comment:
   new line before constructor.



##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {
+    private static volatile Map<ServerName, ServerMetadataCacheTestImpl> INSTANCES = new HashMap<>();
+    private Connection connectionForTesting;
+    ServerMetadataCacheTestImpl(Configuration conf) {
+        super(conf);
+    }
+
+    public static ServerMetadataCacheTestImpl getInstance(Configuration conf, ServerName serverName) {
+        ServerMetadataCacheTestImpl result = INSTANCES.get(serverName);
+        if (result == null) {
+            synchronized (ServerMetadataCacheTestImpl.class) {
+                result = INSTANCES.get(serverName);
+                if (result == null) {
+                    result = new ServerMetadataCacheTestImpl(conf);
+                    INSTANCES.put(serverName, result);
+                }
+            }
+        }
+        return result;
+    }
+
+    public static void setInstance(ServerName serverName, ServerMetadataCacheTestImpl cache) {
+        INSTANCES.put(serverName, cache);
+    }
+
+    public Long getLastDDLTimestampForTableFromCacheOnly(byte[] tenantID, byte[] schemaName,
+                                                         byte[] tableName) {
+        byte[] tableKey = SchemaUtil.getTableKey(tenantID, schemaName, tableName);
+        ImmutableBytesPtr tableKeyPtr = new ImmutableBytesPtr(tableKey);
+        return lastDDLTimestampMap.getIfPresent(tableKeyPtr);
+    }
+
+    public void setConnectionForTesting(Connection connection) {
+        this.connectionForTesting = connection;
+    }
+
+    public static void resetCache() {
+        INSTANCES.clear();
+    }
+
+    @Override
+    protected Connection getConnection(Properties properties) throws SQLException {
+        System.out.println("USED");
+        return connectionForTesting != null ? connectionForTesting
+                : QueryUtil.getConnectionOnServer(properties, this.conf);

Review Comment:
   ```suggestion
   return connectionForTesting != null ? connectionForTesting : super.getConnection(properties);
   ```



##
phoenix-core/src/it/java/org/apache/phoenix/end2end/ServerMetadataCacheTestImpl.java:
##
@@ -0,0 +1,66 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.phoenix.cache.ServerMetadataCacheImpl;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Implementation of {@link ServerMetadataCache} for Integration Tests.
+ * Supports keeping more than one instance keyed on the regionserver ServerName.
+ */
+public class ServerMetadataCacheTestImpl extends ServerMetadataCacheImpl {
+    private static volatile Map<ServerName, ServerMetadataCacheTestImpl> INSTANCES = new HashMap<>();
+    private Connection connectionForTesting;
+    ServerMetadataCacheTestImpl(Configuration conf) {
+   

[jira] [Commented] (PHOENIX-7251) Refactor server-side code to support multiple ServerMetadataCache for HA tests

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824132#comment-17824132 ]

ASF GitHub Bot commented on PHOENIX-7251:
-

shahrs87 opened a new pull request, #1849:
URL: https://github.com/apache/phoenix/pull/1849

   (no comment)




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> --
>
> Key: PHOENIX-7251
> URL: https://issues.apache.org/jira/browse/PHOENIX-7251
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Palash Chauhan
>Assignee: Palash Chauhan
>Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the 2 clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries with 1 of the clusters 
> unavailable will fail. 
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and use the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> be a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 





[PR] PHOENIX-7251 Refactor server-side code to support multiple ServerMetadataCache for HA tests [phoenix]

2024-03-06 Thread via GitHub


shahrs87 opened a new pull request, #1849:
URL: https://github.com/apache/phoenix/pull/1849

   (no comment)





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Nihal Jain (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824113#comment-17824113 ]

Nihal Jain commented on PHOENIX-6769:
-

Created PHOENIX-7262 for backport.

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Nihal Jain (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824112#comment-17824112 ]

Nihal Jain commented on PHOENIX-6769:
-

> If it does break them, then we may need to play with the mockito versions in 
> the HBase profiles.

Yes, this definitely makes sense. I will fall back to your suggested approach if 
need be.

Let's see how it goes. I will clone this JIRA for backport work.

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Nihal Jain (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824111#comment-17824111 ]

Nihal Jain commented on PHOENIX-6769:
-

> If it doesn't break them, then go ahead, Nihal Jain. 

Sure, let me give it a try. 

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Istvan Toth (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824110#comment-17824110 ]

Istvan Toth commented on PHOENIX-6769:
--

If it does break them, then we may need to play with the mockito versions in 
the HBase profiles.
IIRC, the source changes were needed for the 1.0->2.0 mockito upgrade; the rest 
is probably source compatible.

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Istvan Toth (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824108#comment-17824108 ]

Istvan Toth commented on PHOENIX-6769:
--

We still support HBase 2.1 with Hadoop 3.0.x in 5.1.

I was not sure if it would work with those old versions.
If it doesn't break them, then go ahead, [~nihaljain.cs].

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Comment Edited] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Nihal Jain (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824072#comment-17824072 ]

Nihal Jain edited comment on PHOENIX-6769 at 3/6/24 4:02 PM:
-

Hi [~stoty], please let me know if you have any concerns about backporting this 
JIRA to branch-5.1. Also, let me know if you want to take it up yourself; 
otherwise, I can volunteer to do the same.

Context on why I want this, please see PHOENIX-7261


was (Author: nihaljain.cs):
Hi [~stoty] I want to backport this to branch-5.1. Do you have any concerns if 
I raise a backport JIRA?

Context on why I want this, please see PHOENIX-7261

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Nihal Jain (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824072#comment-17824072 ]

Nihal Jain commented on PHOENIX-6769:
-

Hi [~stoty] I want to backport this to branch-5.1. Do you have any concerns if 
I raise a backport JIRA?

Context on why I want this, please see PHOENIX-7261

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-7006) Configure maxLookbackAge at table level

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824062#comment-17824062 ]

ASF GitHub Bot commented on PHOENIX-7006:
-

sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1981155663

   Once the current PR validation is done, shall I squash and rebase all my changes 
into one commit, or will the merge procedure take care of that?




> Configure maxLookbackAge at table level
> ---
>
> Key: PHOENIX-7006
> URL: https://issues.apache.org/jira/browse/PHOENIX-7006
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
>
> The Phoenix max lookback age feature preserves live or deleted row versions that 
> are only visible through the max lookback window; it does not preserve any 
> unwanted row versions that should not be visible through the max lookback 
> window. More details on the max lookback redesign: PHOENIX-6888
> As of today, maxlookback age is only configurable at the cluster level 
> (config key: {_}phoenix.max.lookback.age.seconds{_}), meaning the same value 
> is used by all tables. This does not allow an individual table's compaction 
> scanner to retain data based on a table level maxlookback age. 
> Setting max lookback age at the table level can serve multiple purposes, e.g. 
> change-data-capture (PHOENIX-7001) for an individual table should have its own 
> latest data retention period.
> The purpose of this Jira is to allow maxlookback age as a table level 
> property:
>  * New column in SYSTEM.CATALOG to preserve table level maxlookback age
>  * PTable object to read the value of maxlookback from SYSTEM.CATALOG
>  * Allow CREATE/ALTER TABLE DDLs to provide maxlookback attribute
>  * CompactionScanner should use table level maxlookbackAge, if available, 
> else use cluster level config
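
A small, hypothetical Java sketch of that last bullet (the PTable accessor and the 
default value are assumptions; only the config key phoenix.max.lookback.age.seconds 
comes from the description above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.phoenix.schema.PTable;

// Hypothetical helper, not the actual CompactionScanner code.
final class MaxLookbackResolver {
    static long effectiveMaxLookbackAge(PTable table, Configuration conf) {
        Long tableLevel = table.getMaxLookbackAge();  // assumed table-level accessor
        if (tableLevel != null) {
            return tableLevel;                        // table-level value wins
        }
        // Otherwise fall back to the cluster-level configuration.
        return conf.getLong("phoenix.max.lookback.age.seconds", 0L /* assumed default */);
    }
}
{code}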





Re: [PR] PHOENIX-7006: Configure maxLookbackAge at table level [phoenix]

2024-03-06 Thread via GitHub


sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1981155663

   Once the current PR validation is done, shall I squash and rebase all my changes 
into one commit, or will the merge procedure take care of that?





[jira] [Commented] (PHOENIX-7006) Configure maxLookbackAge at table level

2024-03-06 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824055#comment-17824055 ]

ASF GitHub Bot commented on PHOENIX-7006:
-

sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1981135259

   Thanks a lot, @virajjasani. I have resolved the conflicts. The conflicts 
were around the version changes for 5.3.0. Please re-review. Thanks




> Configure maxLookbackAge at table level
> ---
>
> Key: PHOENIX-7006
> URL: https://issues.apache.org/jira/browse/PHOENIX-7006
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
>
> The Phoenix max lookback age feature preserves live or deleted row versions that 
> are only visible through the max lookback window; it does not preserve any 
> unwanted row versions that should not be visible through the max lookback 
> window. More details on the max lookback redesign: PHOENIX-6888
> As of today, maxlookback age is only configurable at the cluster level 
> (config key: {_}phoenix.max.lookback.age.seconds{_}), meaning the same value 
> is used by all tables. This does not allow an individual table's compaction 
> scanner to retain data based on a table level maxlookback age. 
> Setting max lookback age at the table level can serve multiple purposes, e.g. 
> change-data-capture (PHOENIX-7001) for an individual table should have its own 
> latest data retention period.
> The purpose of this Jira is to allow maxlookback age as a table level 
> property:
>  * New column in SYSTEM.CATALOG to preserve table level maxlookback age
>  * PTable object to read the value of maxlookback from SYSTEM.CATALOG
>  * Allow CREATE/ALTER TABLE DDLs to provide maxlookback attribute
>  * CompactionScanner should use table level maxlookbackAge, if available, 
> else use cluster level config





Re: [PR] PHOENIX-7006: Configure maxLookbackAge at table level [phoenix]

2024-03-06 Thread via GitHub


sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1981135259

   Thanks a lot, @virajjasani. I have resolved the conflicts. The conflicts 
were around the version changes for 5.3.0. Please re-review. Thanks

