[jira] [Resolved] (PHOENIX-7006) Configure maxLookbackAge at table level

2024-03-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7006.
---
Fix Version/s: 5.3.0
   Resolution: Fixed

> Configure maxLookbackAge at table level
> ---
>
> Key: PHOENIX-7006
> URL: https://issues.apache.org/jira/browse/PHOENIX-7006
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
> Fix For: 5.3.0
>
>
> The Phoenix max lookback age feature preserves live or deleted row versions that 
> are visible only through the max lookback window; it does not preserve unwanted 
> row versions that should not be visible through that window. More details on the 
> max lookback redesign: PHOENIX-6888
> As of today, max lookback age is only configurable at the cluster level 
> (config key: {_}phoenix.max.lookback.age.seconds{_}), meaning the same value 
> is used by all tables. This does not allow the compaction scanner to retain data 
> based on a table level max lookback age. Setting max lookback age at the table 
> level can serve multiple purposes, e.g. change data capture (PHOENIX-7001) for an 
> individual table should have its own data retention period.
> The purpose of this Jira is to allow max lookback age as a table level 
> property:
>  * New column in SYSTEM.CATALOG to store the table level max lookback age
>  * PTable object to read the value of maxLookbackAge from SYSTEM.CATALOG
>  * Allow CREATE/ALTER TABLE DDLs to provide a maxLookbackAge attribute
>  * CompactionScanner should use the table level maxLookbackAge, if available, 
> else fall back to the cluster level config (see the sketch below)
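
A minimal sketch of that fallback, assuming a hypothetical PTable accessor named 
getMaxLookbackAge() alongside the existing cluster level config key; the property 
and accessor names in the committed change may differ:

{noformat}
// Sketch only: resolve the effective max lookback age for a compaction,
// preferring the table level value when one is set (accessor name assumed).
private long resolveMaxLookbackAge(PTable table, Configuration conf) {
    Long tableLevelAge = table.getMaxLookbackAge(); // hypothetical table level property
    if (tableLevelAge != null) {
        return tableLevelAge;                       // table level override wins
    }
    // Fall back to the cluster wide setting (default assumed to be 0, i.e. disabled).
    return conf.getLong("phoenix.max.lookback.age.seconds", 0L);
}
{noformat}

On the DDL side, the attribute would presumably be supplied as a table property, 
e.g. something like CREATE TABLE T (...) MAX_LOOKBACK_AGE=86400; the exact property 
name and units are assumptions here.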



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] 5.2.0 RC blocking issues

2024-03-11 Thread Viraj Jasani
Are we good to start with the Omid release?


On Wed, Feb 28, 2024 at 1:51 PM Viraj Jasani  wrote:

> After resolving a couple more issues, I finally have the RC ready for
> vote. I will start the thread soon.
>
>
> On Tue, Feb 27, 2024 at 8:26 AM Viraj Jasani  wrote:
>
>> Another release attempt failed during publish release step, pushed fix
>> and ported to 5.2 branch:
>>
>> https://github.com/apache/phoenix/commit/bc1e2e7bea40c7d03940748e8f1d9f6b23339867
>>
>>
>> On Mon, Feb 26, 2024 at 5:36 PM Viraj Jasani  wrote:
>>
>>> Thank you Istvan!
>>>
>>> Except for the arm64 vs amd64 issue, I was able to get past the other issues. For
>>> the arm64 JDK, I have made a local change to unblock the RC, and I hope that
>>> should be fine.
>>>
>>> However, publish-release step is failing with gpg error:
>>>
>>> 01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign (sign-release-artifacts)
>>> @ phoenix ---
>>> 01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
>>> gpg: setting pinentry mode 'error' failed: Forbidden
>>> gpg: keydb_search failed: Forbidden
>>> gpg: skipped "0x1012D134": Forbidden
>>> gpg: signing failed: Forbidden
>>>
>>> I am not sure of the exact root cause here, but it is quite likely that
>>> this is related to MGPG-92 that Nick created. I
>>> wonder if we can run the publish-release step directly for debugging
>>> purposes, by any chance.
>>>
>>>
>>>
>>>
>>> On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth 
>>> wrote:
>>>
 IIRC I originally copied the docker release scripts from HBase, which took them
 from Spark.
 The M1 issues may have been already fixed in one of those projects.

 A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific
 issues.
 I can't help directly, as I do not have access to a Mac, but ping me on
 Slack if you get stuck.

 As for the third issue, the scripts generate logs in the working
 directory.
 If they do not log the maven command line, you could easily add a line
 to
 log them.
 The ERRORS logged are a known issue, as Maven does not like the tricks
 used
 for multi-profile building, but even 3.9.6 accepts them, and only logs
 WARNINGs in my experience.

 I'm going to do a dry-run of the release scripts locally, and see if I
 can
 repro some problems on my Intel Linux machine.
 If you have access to a secure Intel Linux host, you may also want to
 try
 to run the scripts there.
 (though getting the ssh password entry working can be tricky)

 Istvan

 On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani 
 wrote:

 > Hi,
 >
 > I have started with creating 5.2.0 RC, I am starting this thread to
 discuss
 > some of the issues I have come across so far.
 >
 > 1) do-release-docker.sh is not able to grep and identify the snapshot and
 > release versions in release-util.sh.
 > The function parse_version works fine, though, if run manually on the 5.2 pom
 > contents. Hence, I manually updated the utility to use the 5.2.0-SNAPSHOT
 > version:
 >
 > --- a/dev/create-release/release-util.sh
 > +++ b/dev/create-release/release-util.sh
 > @@ -149,6 +149,7 @@ function get_release_info {
 >local version
 >version="$(curl -s
 > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
 >  parse_version)"
 > +  version="5.2.0-SNAPSHOT"
 >echo "Current branch VERSION is $version."
 >
 >RELEASE_VERSION=""
 >
 >
 > This is done to unblock the release for now. We can investigate and
 fix
 > this later.
 >
 > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:
 >
 > Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
 > update-alternatives: using
 > /usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
 > /usr/bin/appletviewer (appletviewer) in auto mode
 > update-alternatives: using
 /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
 > to provide /usr/bin/jconsole (jconsole) in auto mode
 > Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
 > Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
 > Processing triggers for ca-certificates (20230311ubuntu0.18.04.1) ...
 > Updating certificates in /etc/ssl/certs...
 > 0 added, 0 removed; done.
 > Running hooks in /etc/ca-certificates/update.d...
 > done.
 > done.
 > Processing triggers for libgdk-pixbuf2.0-0:arm64 (2.36.11-2) ...
 > update-alternatives: error: alternative
 > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java for java not
 registered; not
 > setting
 >
 > In order to resolve this, I set java to use java-8-openjdk-arm64
 instead.
 > e.g. update-alternatives --set java
 > /usr/lib/jvm/java-8-openjdk-arm64/jre/bin/java
 > (and all other places where we use amd64)
 >
 > This is done to 

Re: [DISCUSS] 5.2.0 RC blocking issues

2024-03-11 Thread rajeshb...@apache.org
Yes Viraj, almost done with testing. Most likely I will start the release today.

Thanks,
Rajeshbabu.

On Tue, Mar 12, 2024, 9:53 AM Viraj Jasani  wrote:

> Are we good to start with the Omid release?
>
>
> On Wed, Feb 28, 2024 at 1:51 PM Viraj Jasani  wrote:
>
> > After resolving a couple more issues, I finally have the RC ready for
> > vote. I will start the thread soon.
> >
> >
> > On Tue, Feb 27, 2024 at 8:26 AM Viraj Jasani  wrote:
> >
> >> Another release attempt failed during publish release step, pushed fix
> >> and ported to 5.2 branch:
> >>
> >>
> https://github.com/apache/phoenix/commit/bc1e2e7bea40c7d03940748e8f1d9f6b23339867
> >>
> >>
> >> On Mon, Feb 26, 2024 at 5:36 PM Viraj Jasani 
> wrote:
> >>
> >>> Thank you Istvan!
> >>>
> >>> Except for the arm64 vs amd64 issue, I was able to get past the other issues.
> >>> For the arm64 JDK, I have made a local change to unblock the RC, and I hope
> >>> that should be fine.
> >>>
> >>> However, publish-release step is failing with gpg error:
> >>>
> >>> 01:03:53 [INFO] --- maven-gpg-plugin:3.1.0:sign
> (sign-release-artifacts)
> >>> @ phoenix ---
> >>> 01:03:53 [INFO] Signing 3 files with 0x1012D134 secret key.
> >>> gpg: setting pinentry mode 'error' failed: Forbidden
> >>> gpg: keydb_search failed: Forbidden
> >>> gpg: skipped "0x1012D134": Forbidden
> >>> gpg: signing failed: Forbidden
> >>>
> >>> I am not sure of the exact root cause here, but it is quite likely that
> >>> this is related to MGPG-92 that Nick created. I
> >>> wonder if we can run the publish-release step directly for debugging
> >>> purposes, by any chance.
> >>>
> >>>
> >>>
> >>>
> >>> On Sun, Feb 25, 2024 at 10:03 PM Istvan Toth
> 
> >>> wrote:
> >>>
>  IIRC I originally copied the docker release scripts from HBase, which took them
>  from Spark.
>  The M1 issues may have been already fixed in one of those projects.
> 
>  A simple Ubuntu base image upgrade to 22.04 may fix the M1 specific
>  issues.
>  I can't help directly, as I do not have access to a Mac, but ping me
> on
>  Slack if you get stuck.
> 
>  As for the third issue, the scripts generate logs in the working
>  directory.
>  If they do not log the maven command line, you could easily add a line
>  to
>  log them.
>  The ERRORS logged are a known issue, as Maven does not like the tricks
>  used
>  for multi-profile building, but even 3.9.6 accepts them, and only logs
>  WARNINGs in my experience.
> 
>  I'm going to do a dry-run of the release scripts locally, and see if I
>  can
>  repro some problems on my Intel Linux machine.
>  If you have access to a secure Intel Linux host, you may also want to
>  try
>  to run the scripts there.
>  (though getting the ssh password entry working can be tricky)
> 
>  Istvan
> 
>  On Sun, Feb 25, 2024 at 9:37 PM Viraj Jasani 
>  wrote:
> 
>  > Hi,
>  >
>  > I have started with creating 5.2.0 RC, I am starting this thread to
>  discuss
>  > some of the issues I have come across so far.
>  >
 > 1) do-release-docker.sh is not able to grep and identify the snapshot and
 > release versions in release-util.sh.
 > The function parse_version works fine, though, if run manually on the 5.2 pom
 > contents. Hence, I manually updated the utility to use the 5.2.0-SNAPSHOT
 > version:
>  >
>  > --- a/dev/create-release/release-util.sh
>  > +++ b/dev/create-release/release-util.sh
>  > @@ -149,6 +149,7 @@ function get_release_info {
>  >local version
>  >version="$(curl -s
>  > "$ASF_REPO_WEBUI;a=blob_plain;f=pom.xml;hb=refs/heads/$GIT_BRANCH" |
>  >  parse_version)"
>  > +  version="5.2.0-SNAPSHOT"
>  >echo "Current branch VERSION is $version."
>  >
>  >RELEASE_VERSION=""
>  >
>  >
>  > This is done to unblock the release for now. We can investigate and
>  fix
>  > this later.
>  >
>  > 2) openjdk-8-amd64 installation fails because I am using M1 Mac:
>  >
>  > Setting up openjdk-8-jdk:arm64 (8u372-ga~us1-0ubuntu1~18.04) ...
>  > update-alternatives: using
>  > /usr/lib/jvm/java-8-openjdk-arm64/bin/appletviewer to provide
>  > /usr/bin/appletviewer (appletviewer) in auto mode
>  > update-alternatives: using
>  /usr/lib/jvm/java-8-openjdk-arm64/bin/jconsole
>  > to provide /usr/bin/jconsole (jconsole) in auto mode
>  > Setting up ubuntu-mono (16.10+18.04.20181005-0ubuntu1) ...
>  > Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
>  > Processing triggers for ca-certificates (20230311ubuntu0.18.04.1)
> ...
>  > Updating certificates in /etc/ssl/certs...
>  > 0 added, 0 removed; done.
>  > Running hooks in /etc/ca-certificates/update.d...
>  > done.
>  > done.
>  > Processing triggers for libgdk-pixbuf2.0-0:arm64 (2.36.11-2) ...
>  > 

[jira] [Created] (PHOENIX-7269) Upgrade fails when HBase table for index is missing

2024-03-11 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7269:


 Summary: Upgrade fails when HBase table for index is missing
 Key: PHOENIX-7269
 URL: https://issues.apache.org/jira/browse/PHOENIX-7269
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Istvan Toth


When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
that are defined in SYSTEM.CATALOG but are missing the corresponding HBase backing 
table.

The upgrade should log a warning and continue in this case, as those indexes are 
broken anyway.
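
A rough sketch of the proposed behavior, assuming the failing lookup happens inside 
the per-index loop in UpgradeUtil.addViewIndexToParentLinks() (indexName and LOGGER 
are placeholders, not the actual patch):

{noformat}
// Sketch: skip indexes whose HBase backing table is gone instead of aborting the upgrade.
PTable index;
try {
    index = PhoenixRuntime.getTable(conn, indexName);
} catch (TableNotFoundException e) {
    LOGGER.warn("Index {} is defined in SYSTEM.CATALOG but its HBase table is missing; "
            + "skipping it during upgrade", indexName, e);
    continue; // the index is broken anyway, move on to the next one
}
{noformat}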



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7269) Upgrade fails when HBase table for index is missing

2024-03-11 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7269:
-
Description: 
When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
that are defined in SYSTEM.CATALOG but are missing the corresponding HBase backing 
table.

The upgrade should log a warning and continue in this case, as those indexes are 
broken anyway.



{noformat}
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=REDACTED
... 14 more

at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:991)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:953)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1785)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1764)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:2013)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:657)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:545)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:541)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:536)
at 
org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:457)
at 
org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks(UpgradeUtil.java:1244)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:3794)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3951)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3337)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3238)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3238)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:135)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.Commands.connect(Commands.java:1364)
at sqlline.Commands.connect(Commands.java:1244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:730)
at sqlline.SqlLine.initArgs(SqlLine.java:410)
at sqlline.SqlLine.begin(SqlLine.java:515)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206){noformat}


  was:
When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
that are defined in SYSTEM.CATALOG but are missing the corresponding HBase backing 
table.

The upgrade should log a warning and continue in this case, as those indexes are 
broken anyway.


> Upgrade fails when HBase table for index is missing
> ---
>
> Key: PHOENIX-7269
> URL: https://issues.apache.org/jira/browse/PHOENIX-7269
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Major
>
> When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
> that are defined in SYSTEM.CATALOG but are missing the corresponding HBase 
> backing table.
> The upgrade should log a warning and continue in this case, as those indexes are 
> broken anyway.
> {noformat}
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=REDACTED
> ... 14 more
>   

[jira] [Created] (PHOENIX-7268) Namespace mapped system tables are not snapshotted before upgrade

2024-03-11 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7268:


 Summary: Namespace mapped system tables are not snapshotted before 
upgrade
 Key: PHOENIX-7268
 URL: https://issues.apache.org/jira/browse/PHOENIX-7268
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Istvan Toth


When upgrading the system tables, Phoenix tries to take a snapshot of 
SYSTEM.CATALOG.
This is useful, as it provides a checkpoint that can be used to restore the 
previous state if the upgrade runs into problems.

However, the default (non namespace mapped) Phoenix system table name is passed to 
HBase, and that table does not exist if system table namespace mapping is enabled.

Either delay taking the snapshot until we know whether syscat is namespace mapped, 
or just try to take a snapshot of both possible HBase tables.
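
A sketch of the second option against the plain HBase Admin API; hbaseConnection is 
a placeholder for the HBase client Connection in scope, and the snapshot name and 
surrounding upgrade plumbing are assumptions:

{noformat}
// Sketch: snapshot whichever physical SYSTEM.CATALOG table actually exists,
// covering both the namespace mapped and the default layout.
TableName mapped = TableName.valueOf("SYSTEM", "CATALOG"); // SYSTEM:CATALOG when mapped
TableName unmapped = TableName.valueOf("SYSTEM.CATALOG");  // default, non mapped name
try (Admin admin = hbaseConnection.getAdmin()) {
    TableName target = admin.tableExists(mapped) ? mapped : unmapped;
    admin.snapshot("SNAPSHOT_SYSTEM_CATALOG_PRE_UPGRADE", target); // snapshot name assumed
}
{noformat}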




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7269) Upgrade fails when HBase table for index is missing

2024-03-11 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7269:
-
Description: 
When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
that are defined in SYSTEM.CATALOG but are missing the corresponding HBase backing 
table.

The upgrade should log a warning and continue in this case, as those indexes are 
broken anyway.

The problem is in 
org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks()

{noformat}
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=REDACTED
... 14 more

at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:991)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:953)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1785)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1764)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:2013)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:657)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:545)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:541)
at 
org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:536)
at 
org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:457)
at 
org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks(UpgradeUtil.java:1244)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:3794)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3951)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3337)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3238)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3238)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:135)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.Commands.connect(Commands.java:1364)
at sqlline.Commands.connect(Commands.java:1244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:730)
at sqlline.SqlLine.initArgs(SqlLine.java:410)
at sqlline.SqlLine.begin(SqlLine.java:515)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206){noformat}


  was:
When upgrading the metadata, the process is aborted if Phoenix encounters indexes 
that are defined in SYSTEM.CATALOG but are missing the corresponding HBase backing 
table.

The upgrade should log a warning and continue in this case, as those indexes are 
broken anyway.



{noformat}
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=REDACTED
... 14 more

at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:991)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:953)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1785)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1764)
at 

[jira] [Updated] (PHOENIX-7268) Namespace mapped system catalog table not snapshotted before upgrade

2024-03-11 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7268:
-
Summary: Namespace mapped system catalog table not snapshotted before 
upgrade  (was: Namespace mapped system tables are not snapshotted before 
upgrade)

> Namespace mapped system catalog table not snapshotted before upgrade
> 
>
> Key: PHOENIX-7268
> URL: https://issues.apache.org/jira/browse/PHOENIX-7268
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Major
>
> When upgrading the system tables, Phoenix tries to take a snapshot of 
> SYSTEM.CATALOG.
> This is useful, as it provides a checkpoint that can be used to restore the 
> previous state if the upgrade runs into problems.
> However, the default (non namespace mapped) Phoenix system table name is passed 
> to HBase, and that table does not exist if system table namespace mapping is 
> enabled.
> Either delay taking the snapshot until we know whether syscat is namespace 
> mapped, or just try to take a snapshot of both possible HBase tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7270) Always resolve table before DDL operations.

2024-03-11 Thread Rushabh Shah (Jira)
Rushabh Shah created PHOENIX-7270:
-

 Summary: Always resolve table before DDL operations.
 Key: PHOENIX-7270
 URL: https://issues.apache.org/jira/browse/PHOENIX-7270
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rushabh Shah


After we set UCF = NEVER on all tables, we validate last DDL 
timestamps for read and write queries.
For DDL operations, we read the PTable from the client side cache.
In some cases, after a DDL operation we update/invalidate the cache 
for the table being altered, but we don't invalidate the cache for the 
parent table (in the case of views) or for indexes.

When column encoding is enabled, we increment the sequence number of the base 
physical table whenever we create a view. Refer 
[here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2924-L2931]
 for more details. Once the CREATE VIEW command is executed successfully, we 
only add the view to the cache but we don't update the base table in the cache. 
This can cause an inconsistency when we use the same cached PTable object for 
subsequent DDL operations on the base table.

Solutions:
1. Validate last DDL timestamps for the table, view hierarchy and indexes for every 
DDL operation, as we do for read and write queries.
2. Always resolve the table, view hierarchy and indexes for every DDL 
operation. This has the same effect as setting UCF to ALWAYS, but only for 
DDL operations.

I would prefer option #2 since that will guarantee we always get the latest 
PTable object for DDL operations.
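
A rough sketch of what option 2 could look like on the client, using a hypothetical 
updateCacheAlways() helper that stands in for whichever MetaDataClient call forces a 
server round trip regardless of UCF; the real change would live in MetaDataClient:

{noformat}
// Sketch only: re-resolve the table, its parent (for views) and its indexes
// from the server before any DDL mutates metadata.
private PTable resolveForDdl(PhoenixConnection conn, String fullTableName) throws SQLException {
    PTable table = updateCacheAlways(conn, fullTableName); // hypothetical helper
    if (table.getType() == PTableType.VIEW && table.getParentName() != null) {
        updateCacheAlways(conn, table.getParentName().getString()); // refresh the base table too
    }
    for (PTable index : table.getIndexes()) {
        updateCacheAlways(conn, index.getName().getString()); // and each index
    }
    return table;
}
{noformat}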



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7271) Always resolve table before DDL operations.

2024-03-11 Thread Rushabh Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh Shah resolved PHOENIX-7271.
---
Resolution: Duplicate

Duplicate of PHOENIX-7270.

> Always resolve table before DDL operations.
> ---
>
> Key: PHOENIX-7271
> URL: https://issues.apache.org/jira/browse/PHOENIX-7271
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rushabh Shah
>Priority: Major
>
> After we set UCF = NEVER on all tables, we validate last DDL 
> timestamps for read and write queries.
> For DDL operations, we read the PTable from the client side cache.
> In some cases, after a DDL operation we update/invalidate the 
> cache for the table being altered, but we don't invalidate the cache 
> for the parent table (in the case of views) or for indexes.
> When column encoding is enabled, we increment the sequence number of the base 
> physical table whenever we create a view. Refer 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2924-L2931]
>  for more details. Once the CREATE VIEW command is executed successfully, we 
> only add the view to the cache but we don't update the base table in the 
> cache. This can cause an inconsistency when we use the same cached PTable 
> object for subsequent DDL operations on the base table.
> Solutions:
> 1. Validate last DDL timestamps for the table, view hierarchy and indexes for 
> every DDL operation, as we do for read and write queries.
> 2. Always resolve the table, view hierarchy and indexes for every DDL 
> operation. This has the same effect as setting UCF to ALWAYS, but only for 
> DDL operations.
> I would prefer option #2 since that will guarantee we always get the latest 
> PTable object for DDL operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7271) Always resolve table before DDL operations.

2024-03-11 Thread Rushabh Shah (Jira)
Rushabh Shah created PHOENIX-7271:
-

 Summary: Always resolve table before DDL operations.
 Key: PHOENIX-7271
 URL: https://issues.apache.org/jira/browse/PHOENIX-7271
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rushabh Shah


After we set UCF = NEVER on all tables, we validate last DDL 
timestamps for read and write queries.
For DDL operations, we read the PTable from the client side cache.
In some cases, after a DDL operation we update/invalidate the cache 
for the table being altered, but we don't invalidate the cache for the 
parent table (in the case of views) or for indexes.

When column encoding is enabled, we increment the sequence number of the base 
physical table whenever we create a view. Refer 
[here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2924-L2931]
 for more details. Once the CREATE VIEW command is executed successfully, we 
only add the view to the cache but we don't update the base table in the cache. 
This can cause an inconsistency when we use the same cached PTable object for 
subsequent DDL operations on the base table.

Solutions:
1. Validate last DDL timestamps for the table, view hierarchy and indexes for every 
DDL operation, as we do for read and write queries.
2. Always resolve the table, view hierarchy and indexes for every DDL 
operation. This has the same effect as setting UCF to ALWAYS, but only for 
DDL operations.

I would prefer option #2 since that will guarantee we always get the latest 
PTable object for DDL operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7272) Increment Sequence Number should be a separate RPC method in MetaDataEndpointImpl.

2024-03-11 Thread Rushabh Shah (Jira)
Rushabh Shah created PHOENIX-7272:
-

 Summary: Increment Sequence Number should be a separate RPC method 
in MetaDataEndpointImpl.
 Key: PHOENIX-7272
 URL: https://issues.apache.org/jira/browse/PHOENIX-7272
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Rushabh Shah


Currently we increment the sequence number of the base table whenever we create a 
view or alter a view (when we add new columns to the view), like this. Refer to the 
code [here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L413-L419].

{noformat}
private static final String INCREMENT_SEQ_NUM =
        "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + SYSTEM_CATALOG_TABLE + "\"( " +
        TENANT_ID + "," +
        TABLE_SCHEM + "," +
        TABLE_NAME + "," +
        TABLE_SEQ_NUM +
        ") VALUES (?, ?, ?, ?)";

if (tableType == VIEW && !changedCqCounters.isEmpty()) {
    PreparedStatement incrementStatement = connection.prepareStatement(INCREMENT_SEQ_NUM);
    incrementStatement.setString(1, null);
    incrementStatement.setString(2, viewPhysicalTable.getSchemaName().getString());
    incrementStatement.setString(3, viewPhysicalTable.getTableName().getString());
    incrementStatement.setLong(4, viewPhysicalTable.getSequenceNumber() + 1);
    incrementStatement.execute();
}
{noformat}

We are generating the new sequence number on the client side. It is possible 
that the cached PTable (viewPhysicalTable in the above example) could be stale (if 
UCF is set to some value or to NEVER).

Instead of issuing an UPSERT from the client side, we should add a new coprocessor 
method on MetaDataEndpointImpl, something like:

bq. public void incrementSeqNumber(tenantId, schemaName, tableName, long 
originalSeqNum)

This RPC should fail if the current sequence number (stored in SYSTEM.CATALOG) 
doesn't match originalSeqNum, similar to checkAndPut behavior in HBase (sketched 
after the list below).

This has the following benefits:
1. We will lock the table while running the RPC.
2. Once PHOENIX-6883 is committed, we will be able to invalidate the cache for 
the table on all regionservers.
3. If needed, we can bump the last DDL timestamp for the table.
4. If the sequence number doesn't match, the client will refresh its cache and 
resolve the table again.
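
A sketch of the server side guard the new endpoint could apply, modeled on 
checkAndPut; the row lock handling and protobuf plumbing are omitted, and the 
read/write helpers are hypothetical:

{noformat}
// Sketch: bump TABLE_SEQ_NUM only if the caller saw the latest value,
// executed on the region hosting the table's SYSTEM.CATALOG header row.
public void incrementSeqNumber(String tenantId, String schemaName, String tableName,
        long originalSeqNum) throws IOException {
    // Assumed to run while holding the row lock on the table header row.
    long currentSeqNum = readSequenceNumber(tenantId, schemaName, tableName); // hypothetical
    if (currentSeqNum != originalSeqNum) {
        // Stale client: it should refresh its cache and resolve the table again.
        throw new IOException("Stale sequence number: expected " + originalSeqNum
                + " but found " + currentSeqNum);
    }
    writeSequenceNumber(tenantId, schemaName, tableName, currentSeqNum + 1); // hypothetical
    // Optionally bump LAST_DDL_TIMESTAMP and invalidate server side metadata caches here.
}
{noformat}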






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7270) Always resolve table before DDL operations.

2024-03-11 Thread Palash Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Palash Chauhan reassigned PHOENIX-7270:
---

Assignee: Palash Chauhan

> Always resolve table before DDL operations.
> ---
>
> Key: PHOENIX-7270
> URL: https://issues.apache.org/jira/browse/PHOENIX-7270
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rushabh Shah
>Assignee: Palash Chauhan
>Priority: Major
>
> After we set UCF = NEVER on all tables, we validate last DDL 
> timestamps for read and write queries.
> For DDL operations, we read the PTable from the client side cache.
> In some cases, after a DDL operation we update/invalidate the 
> cache for the table being altered, but we don't invalidate the cache 
> for the parent table (in the case of views) or for indexes.
> When column encoding is enabled, we increment the sequence number of the base 
> physical table whenever we create a view. Refer 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2924-L2931]
>  for more details. Once the CREATE VIEW command is executed successfully, we 
> only add the view to the cache but we don't update the base table in the 
> cache. This can cause an inconsistency when we use the same cached PTable 
> object for subsequent DDL operations on the base table.
> Solutions:
> 1. Validate last DDL timestamps for the table, view hierarchy and indexes for 
> every DDL operation, as we do for read and write queries.
> 2. Always resolve the table, view hierarchy and indexes for every DDL 
> operation. This has the same effect as setting UCF to ALWAYS, but only for 
> DDL operations.
> I would prefer option #2 since that will guarantee we always get the latest 
> PTable object for DDL operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)