[jira] [Assigned] (PHOENIX-7214) Purging expired rows during minor compaction for immutable tables

2024-03-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7214:


Assignee: Jacob Isaac

> Purging expired rows during minor compaction for immutable tables
> -----------------------------------------------------------------
>
> Key: PHOENIX-7214
> URL: https://issues.apache.org/jira/browse/PHOENIX-7214
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Kadir Ozdemir
>Assignee: Jacob Isaac
>Priority: Major
>
> HBase minor compaction does not remove deleted or expired cells because it 
> works on only a subset of HFiles. However, it is safe to remove expired rows 
> for immutable tables: rows are inserted but never updated, so a given row 
> will have only one version. This means we can safely remove expired rows 
> during minor compaction using CompactionScanner in Phoenix.
> CompactionScanner currently runs only during major compaction. We can 
> introduce a new table attribute called MINOR_COMPACT_TTL and have Phoenix run 
> CompactionScanner during minor compaction as well for tables with 
> MINOR_COMPACT_TTL = TRUE. By doing so, expired rows will be purged during 
> minor compaction for these tables. This is useful when the TTL is shorter 
> than the major compaction interval, e.g. a 2-day TTL when major compaction 
> typically runs only once a week.
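For illustration, the proposal could surface in DDL roughly as follows. MINOR_COMPACT_TTL is only proposed in this issue; the attribute name, the table name, and the exact syntax below are hypothetical:

```sql
-- Hypothetical DDL: an immutable table with a 2-day TTL that opts into
-- expired-row purging during minor compaction via the proposed attribute.
CREATE TABLE EVENT_LOG (
    HOST VARCHAR NOT NULL,
    TS   TIMESTAMP NOT NULL,
    VAL  DOUBLE
    CONSTRAINT PK PRIMARY KEY (HOST, TS)
) IMMUTABLE_ROWS=true, TTL=172800, MINOR_COMPACT_TTL=true;
```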



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7211) Identify IT tests that can be run successfully against real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7211:


Assignee: Divneet Kaur

> Identify IT tests that can be run successfully against real distributed 
> cluster
> -----------------------------------------------------------------------
>
> Key: PHOENIX-7211
> URL: https://issues.apache.org/jira/browse/PHOENIX-7211
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Divneet Kaur
>Priority: Major
>
> Identify and categorize the IT tests that can be run against real 
> distributed clusters with minimal changes to the tests and the test framework.





[jira] [Created] (PHOENIX-7211) Identify IT tests that can be run successfully against real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7211:


 Summary: Identify IT tests that can be run successfully against 
real distributed cluster
 Key: PHOENIX-7211
 URL: https://issues.apache.org/jira/browse/PHOENIX-7211
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Identify and categorize the IT tests that can be run against real distributed 
clusters with minimal changes to the tests and the test framework.





[jira] [Created] (PHOENIX-7210) Ensure Phoenix IT tests can be run against a real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7210:


 Summary: Ensure Phoenix IT tests can be run against a real 
distributed cluster
 Key: PHOENIX-7210
 URL: https://issues.apache.org/jira/browse/PHOENIX-7210
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


When planning a new Phoenix release in OSS and subsequent upgrades in 
production, we must ensure that our release build is well tested. Currently, 
running the IT test suite ensures that a given build version of the client and 
server works as desired. A few backward-compatibility tests are also run as 
part of the IT test suite, but their coverage is very minimal. The purpose of 
this JIRA is to explore how we can enhance our IT test framework to provide 
backward-compatibility test coverage for various combinations of 
client-server versions.

Our current OSS release sign-off process is described 
[here|https://phoenix.apache.org/release.html].


Apache Phoenix follows semantic versioning i.e. for a given version x.y.z, we 
have:

* Major version:
    * x is the major version.
    * A major upgrade needs to be done when you make incompatible API changes. 
There will generally be public-facing APIs that have changed, metadata changes 
and/or changes that affect existing end-user behavior.
* Minor version:
    * y is the minor version.
    * A minor upgrade needs to be done when you add functionality in a 
backwards compatible manner. Any changes to system table schema (for ex: 
SYSTEM.CATALOG) such as addition of columns, must be done in either a minor or 
major version upgrade.
* Patch version:
    * z is the patch version.
    * A patch upgrade can be done when you make backwards compatible bug fixes. 
This is particularly useful in providing a quick minimal change release on top 
of a pre-existing minor/major version release which fixes bugs.


When upgrading the Major/Minor version we typically run tests other than the IT 
tests to cover various client/server combinations that can manifest during an 
upgrade.

1. Client with old Phoenix jar + Servers with mixed Phoenix jars + old metadata 
(few servers have been upgraded)
2. Client with old Phoenix jar + Server with new Phoenix jar + old metadata 
(bits upgraded)
3. Client with old Phoenix jar + Server with new Phoenix jar + new metadata 
(metadata upgraded)
4. Client with new Phoenix jar + Server with new Phoenix jar + new metadata 
(clients upgraded)


The testing would be far more exhaustive if we could run the Phoenix IT test 
suites against a distributed cluster with the above combinations.





[jira] [Resolved] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-12-06 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac resolved PHOENIX-7040.
--
Resolution: Fixed

> Support TTL for views using the new column TTL in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-7040
> URL: https://issues.apache.org/jira/browse/PHOENIX-7040
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Allow views to be created with TTL specs.
> Ensure TTL is specified only once in the view hierarchy.
> Child views should inherit TTL values from their parent, when not specified 
> for the given view.
> Indexes should inherit the TTL values from the base tables/views.





[jira] [Resolved] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-12-06 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac resolved PHOENIX-7041.
--
Resolution: Fixed

> Populate ROW_KEY_PREFIX column when creating views
> --
>
> Key: PHOENIX-7041
> URL: https://issues.apache.org/jira/browse/PHOENIX-7041
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.1
>
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created by the view will be prefixed by a KeyRange. 
> The view thus can simply be represented by the prefixed KeyRange generated by 
> the expression representing the view statement. 
> The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange 
> will be used to create a PrefixIndex -> mapping a row prefix to a view.





[jira] [Created] (PHOENIX-7108) Provide support for pruning expired rows of views using Phoenix level compactions

2023-11-10 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7108:


 Summary: Provide support for pruning expired rows of views using 
Phoenix level compactions
 Key: PHOENIX-7108
 URL: https://issues.apache.org/jira/browse/PHOENIX-7108
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac


Modify Phoenix compaction framework introduced in PHOENIX-6888 to prune TTL 
expired rows of views.





[jira] [Created] (PHOENIX-7107) Add support for indexing on SYSTEM.CATALOG table

2023-11-10 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7107:


 Summary: Add support for indexing on SYSTEM.CATALOG table
 Key: PHOENIX-7107
 URL: https://issues.apache.org/jira/browse/PHOENIX-7107
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac
Assignee: Jacob Isaac


With partial indexing now available (PHOENIX-7032), having the ability to 
partially index SYSTEM.CATALOG rows would be useful, as it would allow us to 
scan catalog properties more efficiently.
For example:
The SYSTEM.CHILD_LINK table can be thought of as a partial index of 
SYSTEM.CATALOG rows with LINK_TYPE = 4.
Another use case is being able to query which tables/views have TTL set.
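As a sketch only (partial indexes on SYSTEM.CATALOG are not currently supported; the index name and DDL below are hypothetical), the TTL use case might look like:

```sql
-- Hypothetical partial index covering only the catalog rows that carry a
-- TTL, analogous to SYSTEM.CHILD_LINK acting as a partial index of the
-- rows with LINK_TYPE = 4.
CREATE INDEX SYSCAT_TTL_IDX ON SYSTEM.CATALOG (TENANT_ID, TABLE_SCHEM, TABLE_NAME)
    WHERE TTL IS NOT NULL;

-- This query could then be answered from the small index rather than a
-- full SYSTEM.CATALOG scan:
SELECT TENANT_ID, TABLE_SCHEM, TABLE_NAME
FROM SYSTEM.CATALOG
WHERE TTL IS NOT NULL;
```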





[jira] [Created] (PHOENIX-7068) Update Phoenix apache website views page with additional information on usage of views and view indexes

2023-10-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7068:


 Summary: Update Phoenix apache website views page with additional 
information on usage of views and view indexes
 Key: PHOENIX-7068
 URL: https://issues.apache.org/jira/browse/PHOENIX-7068
 Project: Phoenix
  Issue Type: Task
Reporter: Jacob Isaac


Document the findings from PHOENIX-4555, PHOENIX-7047, PHOENIX-7067 





[jira] [Created] (PHOENIX-7067) View indexes should be created only on non overlapping updatable views

2023-10-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7067:


 Summary: View indexes should be created only on non overlapping 
updatable views
 Key: PHOENIX-7067
 URL: https://issues.apache.org/jira/browse/PHOENIX-7067
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


Updatable views, by the definition outlined in PHOENIX-4555, are disjoint 
partitions (virtual tables) on the base HBase table.
View indexes should only be allowed to be defined on these partitions.
As PHOENIX-7047 revealed, index rows are not generated, or get clobbered, for 
certain multi-level views.

This JIRA will address these issues and add the proper constraints on when 
updatable views and view indexes can be created:
1. A view should be allowed to extend the parent PK (i.e. add its own PK 
columns in the view definition) only when there are no indexes in the parent 
hierarchy, and vice versa.
2. View indexes can be defined on a given view only when no child views have 
extended the PK of that view.
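For illustration, a minimal scenario that constraint 1 would reject (all table and view names below are invented):

```sql
CREATE TABLE BASE (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, V1 INTEGER
    CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true;

CREATE VIEW GV(ID1 INTEGER NOT NULL, C1 VARCHAR
    CONSTRAINT PK PRIMARY KEY (ID1)) AS SELECT * FROM BASE WHERE KP = 'P01';

CREATE INDEX GV_IDX ON GV(ID1) INCLUDE (C1);

-- Under constraint 1 this should fail: the child view extends the PK
-- (adds ROW_ID) while an index (GV_IDX) already exists in the parent
-- hierarchy.
CREATE VIEW CV(ROW_ID CHAR(15) NOT NULL
    CONSTRAINT PK PRIMARY KEY (ROW_ID)) AS SELECT * FROM GV WHERE ID1 = 42724;
```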

 





[jira] [Assigned] (PHOENIX-7063) Track and account garbage collected phoenix connections

2023-10-05 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7063:


Assignee: Lokesh Khurana

> Track and account garbage collected phoenix connections
> -------------------------------------------------------
>
> Key: PHOENIX-7063
> URL: https://issues.apache.org/jira/browse/PHOENIX-7063
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> In a production environment, misbehaving clients may forget to close Phoenix 
> connections, causing Phoenix connections to leak.
> Moreover, when Phoenix connections are tracked and limited per JVM via the 
> GLOBAL_OPEN_PHOENIX_CONNECTIONS metrics counter, leaked connections can lead 
> to client requests for new Phoenix connections being rejected.
> Tracking and counting garbage-collected Phoenix connections can alleviate 
> the above issues.
> Providing additional logging during such reclaims will give additional 
> insight into a production environment.





[jira] [Created] (PHOENIX-7063) Track and account garbage collected phoenix connections

2023-10-05 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7063:


 Summary: Track and account garbage collected phoenix connections
 Key: PHOENIX-7063
 URL: https://issues.apache.org/jira/browse/PHOENIX-7063
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.1.3
Reporter: Jacob Isaac


In a production environment, misbehaving clients may forget to close Phoenix 
connections, causing Phoenix connections to leak.

Moreover, when Phoenix connections are tracked and limited per JVM via the 
GLOBAL_OPEN_PHOENIX_CONNECTIONS metrics counter, leaked connections can lead 
to client requests for new Phoenix connections being rejected.

Tracking and counting garbage-collected Phoenix connections can alleviate the 
above issues.

Providing additional logging during such reclaims will give additional 
insight into a production environment.
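As an illustrative sketch of the counting idea only (the class and counter names below are invented; Phoenix's real counter is GLOBAL_OPEN_PHOENIX_CONNECTIONS in its metrics system), a java.lang.ref.Cleaner can detect connections that were garbage collected without ever being closed:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: count connections reclaimed by GC without an explicit close().
public class ConnectionLeakTracker {
    static final AtomicLong OPENED = new AtomicLong();
    static final AtomicLong CLOSED = new AtomicLong();
    static final AtomicLong GC_RECLAIMED = new AtomicLong();
    private static final Cleaner CLEANER = Cleaner.create();

    // Cleaner action; must not hold a reference back to the connection.
    static final class State implements Runnable {
        volatile boolean closed;
        @Override public void run() {
            if (!closed) {
                GC_RECLAIMED.incrementAndGet(); // leaked: GC'd while still open
            }
        }
    }

    static final class TrackedConnection implements AutoCloseable {
        private final State state = new State();
        private final Cleaner.Cleanable cleanable;

        TrackedConnection() {
            OPENED.incrementAndGet();
            this.cleanable = CLEANER.register(this, state);
        }

        @Override public void close() {
            state.closed = true;
            CLOSED.incrementAndGet();
            cleanable.clean(); // run (and retire) the cleaning action now
        }
    }

    public static void main(String[] args) {
        try (TrackedConnection c = new TrackedConnection()) {
            // use the connection
        }
        System.out.println("opened=" + OPENED.get()
            + " closed=" + CLOSED.get()
            + " gcReclaimed=" + GC_RECLAIMED.get());
    }
}
```

Periodically logging or exporting the reclaimed count would surface leaks that the open-connection counter alone cannot distinguish from legitimate load.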





[jira] [Updated] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7047:
-
Description: 
 
{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
            + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
            + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
            globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
            indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                + " AS SELECT * FROM %s WHERE ID1 = 42724",
            viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
            leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
            + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}
{code}

  was:
 
{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
String schemaName = generateUniqueName();
String dataTableName = generateUniqueName();
String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
String globalViewName = generateUniqueName();
String globalViewFullName = SchemaUtil.getTableName(schemaName, 
globalViewName);
String viewName = generateUniqueName();
String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
String leafViewName = generateUniqueName();
String leafViewFullName = SchemaUtil.getTableName(schemaName, 
leafViewName);
String indexTableName1 = generateUniqueName();
String indexTableFullName1 = SchemaUtil.getTableName(schemaName, 
indexTableName1);

conn.createStatement().execute("CREATE TABLE " + dataTableFullName
+ " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER, 
VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
conn.commit();
conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS 
%s(ID1 INTEGER not null, COL4 VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1)) AS 
SELECT * FROM %s WHERE KP = 'P01'", globalViewFullName, dataTableFullName));
conn.commit();

conn.createStatement().execute(String.format(
"CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)", 
indexTableName1, globalViewFullName));

conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS 
%s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,COLA VARCHAR CONSTRAINT pk 
PRIMARY KEY (TP,ROW_ID)) AS SELECT * FROM %s WHERE ID1 = 42724", 
viewFullName,globalViewFullName));
conn.commit();

conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS 
%s AS SELECT * from %s WHERE TP = 32", leafViewFullName, viewFullName));
conn.commit();

conn.createStatement().execute("UPSERT INTO " + leafViewFullName + " 
(OID, ROW_ID, COL4, COLA) values ('00D0y01', 
'00Z0y01','d07223','a05493')");
conn.commit();



[jira] [Updated] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7047:
-
Description: 
 
{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
            + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
            + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
            globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
            indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                + " AS SELECT * FROM %s WHERE ID1 = 42724",
            viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
            leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
            + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}
{code}
 

  was:
@Test
public void testTenantViewUpdate() throws Exception {
Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
String schemaName = generateUniqueName();
String dataTableName = generateUniqueName();
String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
String globalViewName = generateUniqueName();
String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
String viewName = generateUniqueName();
String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
String leafViewName = generateUniqueName();
String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
String indexTableName1 = generateUniqueName();
String indexTableFullName1 = SchemaUtil.getTableName(schemaName, 
indexTableName1);

conn.createStatement().execute("CREATE TABLE " + dataTableFullName
+ " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER, VAL2 INTEGER 
CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
conn.commit();
conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS %s(ID1 
INTEGER not null, COL4 VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * 
FROM %s WHERE KP = 'P01'", globalViewFullName, dataTableFullName));
conn.commit();

conn.createStatement().execute(String.format(
"CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)", indexTableName1, 
globalViewFullName));

conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS %s(TP 
INTEGER not null, ROW_ID CHAR(15) NOT NULL,COLA VARCHAR CONSTRAINT pk PRIMARY 
KEY (TP,ROW_ID)) AS SELECT * FROM %s WHERE ID1 = 42724", 
viewFullName,globalViewFullName));
conn.commit();

conn.createStatement().execute(String.format("CREATE VIEW IF NOT EXISTS %s AS 
SELECT * from %s WHERE TP = 32", leafViewFullName, viewFullName));
conn.commit();

conn.createStatement().execute("UPSERT INTO " + leafViewFullName + " (OID, 
ROW_ID, COL4, COLA) values ('00D0y01', 
'00Z0y01','d07223','a05493')");
conn.commit();


TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
TestUtil.dumpTable(conn, TableName.valueOf(("_IDX_" + dataTableFullName)));
}
}


> Index rows not generated for certain multilevel views
> 

[jira] [Created] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7047:


 Summary: Index rows not generated for certain multilevel views
 Key: PHOENIX-7047
 URL: https://issues.apache.org/jira/browse/PHOENIX-7047
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac


@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
            + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
            + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
            globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
            indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                + " AS SELECT * FROM %s WHERE ID1 = 42724",
            viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
            "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
            leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
            + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}





[jira] [Assigned] (PHOENIX-7046) Query results return different values when PKs of view have DESC order

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7046:


Assignee: Viraj Jasani

> Query results return different values when PKs of view have DESC order
> --
>
> Key: PHOENIX-7046
> URL: https://issues.apache.org/jira/browse/PHOENIX-7046
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2023-09-21 at 10.54.08 AM.png
>
>
> To reproduce -
> CREATE TABLE IF NOT EXISTS TEST_ENTITY.T1(OID CHAR(15) NOT NULL,KP CHAR(3) 
> NOT NULL, COL1 VARCHAR CONSTRAINT pk PRIMARY KEY (OID,KP)) 
> MULTI_TENANT=true,COLUMN_ENCODED_BYTES=0;
> CREATE VIEW IF NOT EXISTS TEST_ENTITY.G1_P01(ID1 INTEGER not null, COL4 
> VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1 DESC)) AS SELECT * FROM 
> TEST_ENTITY.T1 WHERE KP = 'P01';
> CREATE VIEW IF NOT EXISTS TEST_ENTITY.TV_P01(ROW_ID CHAR(15) NOT NULL,COLA 
> VARCHAR CONSTRAINT pk PRIMARY KEY (ROW_ID)) AS SELECT * FROM 
> TEST_ENTITY.G1_P01 WHERE ID1 = 42724;
> UPSERT INTO TEST_ENTITY.TV_P01(OID, ROW_ID, COL4, COLA) 
> VALUES('00D0y01', '00Z0y01','d07223','a05493');
> SELECT ID1, COL4 FROM TEST_ENTITY.TV_P01;
> SELECT ID1, COL4 FROM TEST_ENTITY.G1_P01;
>  
>  
> !Screenshot 2023-09-21 at 10.54.08 AM.png!





[jira] [Created] (PHOENIX-7046) Query results return different values when PKs of view have DESC order

2023-09-21 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7046:


 Summary: Query results return different values when PKs of view 
have DESC order
 Key: PHOENIX-7046
 URL: https://issues.apache.org/jira/browse/PHOENIX-7046
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac
 Attachments: Screenshot 2023-09-21 at 10.54.08 AM.png

To reproduce -

CREATE TABLE IF NOT EXISTS TEST_ENTITY.T1(OID CHAR(15) NOT NULL,KP CHAR(3) NOT 
NULL, COL1 VARCHAR CONSTRAINT pk PRIMARY KEY (OID,KP)) 
MULTI_TENANT=true,COLUMN_ENCODED_BYTES=0;

CREATE VIEW IF NOT EXISTS TEST_ENTITY.G1_P01(ID1 INTEGER not null, COL4 
VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1 DESC)) AS SELECT * FROM TEST_ENTITY.T1 
WHERE KP = 'P01';

CREATE VIEW IF NOT EXISTS TEST_ENTITY.TV_P01(ROW_ID CHAR(15) NOT NULL,COLA 
VARCHAR CONSTRAINT pk PRIMARY KEY (ROW_ID)) AS SELECT * FROM TEST_ENTITY.G1_P01 
WHERE ID1 = 42724;

UPSERT INTO TEST_ENTITY.TV_P01(OID, ROW_ID, COL4, COLA) 
VALUES('00D0y01', '00Z0y01','d07223','a05493');

SELECT ID1, COL4 FROM TEST_ENTITY.TV_P01;
SELECT ID1, COL4 FROM TEST_ENTITY.G1_P01;

 

 

!Screenshot 2023-09-21 at 10.54.08 AM.png!





[jira] [Created] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-09-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7041:


 Summary: Populate ROW_KEY_PREFIX column when creating views
 Key: PHOENIX-7041
 URL: https://issues.apache.org/jira/browse/PHOENIX-7041
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
 Fix For: 5.2.1


When a view statement is defined by the constraints articulated in 
PHOENIX-4555, all rows created by the view will be prefixed by a KeyRange. The 
view thus can simply be represented by the prefixed KeyRange generated by the 
expression representing the view statement. 

The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange will 
be used to create a PrefixIndex -> mapping a row prefix to a view.
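To make the prefix idea concrete, a sketch with invented names (this assumes a multi-tenant table per the PHOENIX-4555 constraints):

```sql
CREATE TABLE T (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, COL1 VARCHAR
    CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true;

-- For a tenant connection, OID is pinned to the tenant id and KP is pinned
-- to 'P01' by the view's WHERE clause, so every row written through the
-- view falls in one KeyRange of the base table. ROW_KEY_PREFIX would store
-- that KeyRange, enabling a PrefixIndex lookup from row prefix to view.
CREATE VIEW V_P01 AS SELECT * FROM T WHERE KP = 'P01';
```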





[jira] [Assigned] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-09-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7041:


Assignee: Jacob Isaac

> Populate ROW_KEY_PREFIX column when creating views
> --
>
> Key: PHOENIX-7041
> URL: https://issues.apache.org/jira/browse/PHOENIX-7041
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.1
>
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created by the view will be prefixed by a KeyRange. 
> The view thus can simply be represented by the prefixed KeyRange generated by 
> the expression representing the view statement. 
> The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange 
> will be used to create a PrefixIndex -> mapping a row prefix to a view.





[jira] [Created] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-09-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7040:


 Summary: Support TTL for views using the new column TTL in 
SYSTEM.CATALOG
 Key: PHOENIX-7040
 URL: https://issues.apache.org/jira/browse/PHOENIX-7040
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Allow views to be created with TTL specs.

Ensure TTL is specified only once in the view hierarchy.

Child views should inherit TTL values from their parent, when not specified for 
the given view.

Indexes should inherit the TTL values from the base tables/views.
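A sketch of the intended behavior (the TTL-on-view syntax is as proposed by this work and may differ in the final implementation; the table and view names are invented):

```sql
-- TTL declared once, at the global view level:
CREATE VIEW GV(ID1 INTEGER NOT NULL CONSTRAINT PK PRIMARY KEY (ID1))
    AS SELECT * FROM T WHERE KP = 'P01' TTL = 86400;

-- No TTL clause here: the child view inherits 86400 from GV, and any index
-- on GV or CV inherits it as well.
CREATE VIEW CV AS SELECT * FROM GV WHERE ID1 = 42724;

-- Declaring TTL a second time in the same hierarchy should be rejected:
-- CREATE VIEW CV2 AS SELECT * FROM GV WHERE ID1 = 7 TTL = 3600;  -- error
```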





[jira] [Assigned] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-09-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7040:


Assignee: Lokesh Khurana

> Support TTL for views using the new column TTL in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-7040
> URL: https://issues.apache.org/jira/browse/PHOENIX-7040
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Allow views to be created with TTL specs.
> Ensure TTL is specified only once in the view hierarchy.
> Child views should inherit TTL values from their parent, when not specified 
> for the given view.
> Indexes should inherit the TTL values from the base tables/views.





[jira] [Assigned] (PHOENIX-7022) Add new columns TTL and ROWKEY_PREFIX

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7022:


Assignee: Lokesh Khurana

> Add new columns TTL and ROWKEY_PREFIX
> -------------------------------------
>
> Key: PHOENIX-7022
> URL: https://issues.apache.org/jira/browse/PHOENIX-7022
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created by the view will be prefixed by a KeyRange. 
> The view thus can simply be represented by the prefixed KeyRange generated by 
> the expression representing the view statement. In other words, there exists 
> a one-to-one mapping between the view (defined by tenant, schema, tablename) 
> and PREFIXED KeyRange.
> For lookup on the PREFIXED KeyRange we will create a new column ROWKEY_PREFIX 
> in SYSTEM.CATALOG. This new column will be populated during view creation 
> when TTL is specified.
>  
> The TTL column (INTEGER) will store the TTL when specified in line with the 
> HBase spec (which uses an int). The PHOENIX_TTL-related columns and code will 
> be deprecated in a separate jira.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7023) Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7023:


Assignee: Lokesh Khurana

> Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code
> --
>
> Key: PHOENIX-7023
> URL: https://issues.apache.org/jira/browse/PHOENIX-7023
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Deprecate the old columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code 
> since they are not compatible with the new 
> [design|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6978:


Assignee: Jacob Isaac  (was: Lokesh Khurana)

> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> old design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and applying deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows and made the scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning the HBase TTL rows.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555
> [New Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7023) Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7023:


 Summary: Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and 
related code
 Key: PHOENIX-7023
 URL: https://issues.apache.org/jira/browse/PHOENIX-7023
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Deprecate the old columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code 
since they are not compatible with the new 
[design|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7022) Add new columns TTL and ROWKEY_PREFIX

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7022:


 Summary: Add new columns TTL and ROWKEY_PREFIX
 Key: PHOENIX-7022
 URL: https://issues.apache.org/jira/browse/PHOENIX-7022
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


When a view statement is defined by the constraints articulated in 
PHOENIX-4555, all rows created by the view will be prefixed by a KeyRange. The 
view thus can simply be represented by the prefixed KeyRange generated by the 
expression representing the view statement. In other words, there exists a 
one-to-one mapping between the view (defined by tenant, schema, tablename) and 
PREFIXED KeyRange.

For lookup on the PREFIXED KeyRange we will create a new column ROWKEY_PREFIX 
in SYSTEM.CATALOG. This new column will be populated during view creation when 
TTL is specified.

 

The TTL column (INTEGER) will store the TTL when specified in line with the 
HBase spec (which uses an int). The PHOENIX_TTL-related columns and code will 
be deprecated in a separate jira.
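The one-to-one mapping between a view and its prefixed KeyRange can be sketched as follows. This is an illustrative model only; the dictionary, function names, and example prefixes are assumptions, not the SYSTEM.CATALOG implementation:

```python
# Hypothetical sketch of the design above (not Phoenix code): each view,
# identified by (tenant, schema, table), maps one-to-one to the row-key
# prefix shared by all of its rows.

rowkey_prefix_by_view = {}

def record_view(tenant, schema, table, prefix, ttl=None):
    # ROWKEY_PREFIX is populated at view-creation time only when TTL is set.
    if ttl is not None:
        rowkey_prefix_by_view[(tenant, schema, table)] = prefix

def view_rows(tenant, schema, table, rows):
    # Lookup on the PREFIXED KeyRange: a row belongs to the view iff its
    # key starts with the view's recorded prefix.
    prefix = rowkey_prefix_by_view[(tenant, schema, table)]
    return [r for r in rows if r.startswith(prefix)]

record_view("tenant1", "S", "V1", b"\x00A", ttl=86400)  # TTL'd view: prefix recorded
record_view("tenant1", "S", "V2", b"\x00B")             # no TTL: nothing recorded
```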



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6978:
-
Description: 
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL old design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and applying deletion logic when 
pruning the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows and made the scans less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning the HBase TTL rows.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555

[New Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]

  was:
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and applying deletion logic when 
pruning the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows and made the scans less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning the HBase TTL rows.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555


> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> old design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and applying deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows and made the scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning the HBase TTL rows.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555
> [New Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7021) Design doc for Phoenix view TTL using Phoenix compactions

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7021:
-
Description: [Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]
  (was: [Design 
doc|[http://example.com|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]])

> Design doc for Phoenix view TTL using Phoenix compactions
> -
>
> Key: PHOENIX-7021
> URL: https://issues.apache.org/jira/browse/PHOENIX-7021
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> [Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7021) Design doc for Phoenix view TTL using Phoenix compactions

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7021:


 Summary: Design doc for Phoenix view TTL using Phoenix compactions
 Key: PHOENIX-7021
 URL: https://issues.apache.org/jira/browse/PHOENIX-7021
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac


[Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7017) Recreating a view deletes the metadata in CHILD_LINK table

2023-08-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7017:


 Summary: Recreating a view deletes the metadata in CHILD_LINK table
 Key: PHOENIX-7017
 URL: https://issues.apache.org/jira/browse/PHOENIX-7017
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0, 5.1.4
Reporter: Jacob Isaac


Steps to reproduce :

Create the same view twice.

The link from the parent table to its child view (link_type = 4) in the 
SYSTEM.CHILD_LINK table is deleted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-07-13 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6978:


Assignee: Lokesh Khurana

> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and applying deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows and made the scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning the HBase TTL rows.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6996) Provide an upgrade path for Phoenix tables with HBase TTL to move their TTL spec to SYSTEM.CATALOG

2023-07-13 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6996:


 Summary: Provide an upgrade path for Phoenix tables with HBase TTL 
to move their TTL spec to SYSTEM.CATALOG
 Key: PHOENIX-6996
 URL: https://issues.apache.org/jira/browse/PHOENIX-6996
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-07-13 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6978:
-
Description: 
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and applying deletion logic when 
pruning the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows and made the scans less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning the HBase TTL rows.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555

  was:
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and applying deletion logic when 
pruning the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows and made the scans less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning the HBase TTL rows.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555


> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and applying deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows and made the scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning the HBase TTL rows.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6979) When phoenix.table.ttl.enabled=true use HBase TTL property value to be the TTL for tables and views

2023-06-14 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6979:


 Summary: When phoenix.table.ttl.enabled=true use HBase TTL 
property value to be the TTL for tables and views
 Key: PHOENIX-6979
 URL: https://issues.apache.org/jira/browse/PHOENIX-6979
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Lokesh Khurana






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-06-14 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6978:


 Summary: Redesign Phoenix TTL for Views
 Key: PHOENIX-6978
 URL: https://issues.apache.org/jira/browse/PHOENIX-6978
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and applying deletion logic when 
pruning the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows and made the scans less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning the HBase TTL rows.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555
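The compaction-based pruning idea reduces to a filter applied while store files are rewritten. A toy version follows; the function name and cell representation are illustrative assumptions, not the CompactionScanner implementation:

```python
# Toy sketch of compaction-time pruning (illustrative only): while rewriting
# store files, drop cells older than the TTL, mirroring how HBase applies
# TTL during its own compactions.

def compact(cells, ttl_seconds, now_ms):
    """cells: iterable of (rowkey, timestamp_ms); keep only unexpired cells."""
    cutoff = now_ms - ttl_seconds * 1000
    return [(key, ts) for key, ts in cells if ts >= cutoff]

store = [(b"row1", 6_000), (b"row2", 4_000)]
survivors = compact(store, ttl_seconds=5, now_ms=10_000)  # cutoff = 5_000 ms
```

No delete markers are written in this model, which is precisely the advantage over the MR-based delete jobs described above.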



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-05-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6910:
-
Attachment: 0001-PHOENIX-6910-initial-commit.patch

> Scans created during query compilation and execution against salted tables 
> need to be more resilient
> 
>
> Key: PHOENIX-6910
> URL: https://issues.apache.org/jira/browse/PHOENIX-6910
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Jacob Isaac
>Assignee: Istvan Toth
>Priority: Major
> Attachments: 0001-PHOENIX-6910-initial-commit.patch
>
>
> The Scan objects created during the WHERE clause compilation and execution 
> phases are incorrect when salted tables are involved and their regions have 
> moved.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-03-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6910:


Assignee: Jacob Isaac

> Scans created during query compilation and execution against salted tables 
> need to be more resilient
> 
>
> Key: PHOENIX-6910
> URL: https://issues.apache.org/jira/browse/PHOENIX-6910
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> The Scan objects created during the WHERE clause compilation and execution 
> phases are incorrect when salted tables are involved and their regions have 
> moved.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-03-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6910:


 Summary: Scans created during query compilation and execution 
against salted tables need to be more resilient
 Key: PHOENIX-6910
 URL: https://issues.apache.org/jira/browse/PHOENIX-6910
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Jacob Isaac


The Scan objects created during the WHERE clause compilation and execution 
phases are incorrect when salted tables are involved and their regions have 
moved.
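For background, salting prepends a bucket byte to every row key, so a single logical key range fans out into one physical range per bucket; a Scan that bakes in stale physical boundaries can miss buckets once regions move. A sketch of the fan-out, with an illustrative stand-in hash rather than Phoenix's actual salt computation:

```python
# Illustrative sketch (not Phoenix code): a salted table stores rows under
# salt_byte + rowkey, so a logical [start, stop) range becomes one physical
# range per salt bucket.

SALT_BUCKETS = 4

def salt_byte(rowkey):
    # Stand-in hash; Phoenix derives the salt from a hash of the row key
    # modulo the bucket count.
    return sum(rowkey) % SALT_BUCKETS

def salted_ranges(start, stop):
    # One (salt + start, salt + stop) physical range per bucket.
    return [(bytes([b]) + start, bytes([b]) + stop)
            for b in range(SALT_BUCKETS)]
```

Because the physical ranges depend only on the bucket count, building them this way (rather than from cached region boundaries) stays correct after region moves.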



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-09-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6752:
-
Description: 
SQL queries using the OR operator were taking a long time in the WHERE clause 
compilation phase when a large number of OR clauses (~50k) were used.

The key observation was that during AND/OR processing, when there is a large 
number of OR expression nodes, the same set of extracted nodes was being added 
repeatedly, bloating the collection and slowing down the processing.

[code|https://github.com/apache/phoenix/blob/0c2008ddf32566c525df26cb94d60be32acc10da/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java#L930]
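The problem in miniature: appending the same extracted nodes once per OR branch grows the collection linearly with the branch count, while deduplicating keeps it bounded. The functions below are a hypothetical model of the AND/OR processing, not the WhereOptimizer code:

```python
# Hypothetical model of the issue above (not the WhereOptimizer code):
# 50k OR branches that each extract the same node bloat a list, while a
# set stores the node exactly once.

def collect_extract_nodes_list(or_branches):
    extracted = []                  # duplicates accumulate, one per branch
    for branch in or_branches:
        extracted.extend(branch)
    return extracted

def collect_extract_nodes_set(or_branches):
    extracted = set()               # identical nodes are added only once
    for branch in or_branches:
        extracted.update(branch)
    return extracted

branches = [("PK1",)] * 50_000      # 50k OR clauses, same extracted node
```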

  was:SQL queries using the OR operator were taking a long time during the 
WHERE clause compilation phase when a large number of OR clauses (~50k) were 
used.


> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time in the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.
> The key observation was that during AND/OR processing, when there is a large 
> number of OR expression nodes, the same set of extracted nodes was being 
> added repeatedly, bloating the collection and slowing down the processing.
> [code|https://github.com/apache/phoenix/blob/0c2008ddf32566c525df26cb94d60be32acc10da/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java#L930]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6751:


Assignee: Jacob Isaac

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above
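The mitigation named in the title can be sketched as a planner fallback; the threshold, names, and tuple representation below are illustrative assumptions, not Phoenix's actual logic:

```python
# Sketch of "force range scan vs skip scan" (illustrative assumptions, not
# Phoenix code): when an IN clause would expand into a very large list of
# point keys, fall back to one range scan over the min/max keys instead of
# building a skip-scan filter over every point.

MAX_POINT_KEYS = 10_000   # hypothetical cutoff

def choose_scan(point_keys):
    if len(point_keys) > MAX_POINT_KEYS:
        # One [min, max] range bounds memory regardless of the point count.
        return ("RANGE_SCAN", min(point_keys), max(point_keys))
    return ("SKIP_SCAN", sorted(point_keys))
```

The range scan reads more rows than a skip scan, but its memory footprint no longer grows with the number of RVC elements.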



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6752:


Assignee: Jacob Isaac

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time in the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Attachment: (was: test-case.txt)

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6752:
-
Attachment: test-case.txt

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time in the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Attachment: test-case.txt

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
> Attachments: test-case.txt
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6752:


 Summary: Duplicate expression nodes in extract nodes during WHERE 
compilation phase leads to poor performance.
 Key: PHOENIX-6752
 URL: https://issues.apache.org/jira/browse/PHOENIX-6752
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.1, 5.1.0, 4.15.0, 5.2.0
Reporter: Jacob Isaac


SQL queries using the OR operator were taking a long time during the WHERE 
clause compilation phase when a large number of OR clauses (~50k) were used.
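A toy Python comparison (not the actual WhereOptimizer code; names are illustrative) shows why duplicate nodes hurt: de-duplicating with list membership is quadratic in the number of nodes, while hash-based de-duplication stays linear, and at ~50k OR clauses that difference dominates compile time.

```python
def dedup_with_list(nodes):
    # O(n^2): each membership test rescans the output list
    out = []
    for n in nodes:
        if n not in out:
            out.append(n)
    return out

def dedup_with_set(nodes):
    # O(n): hash-based de-duplication that preserves order
    seen = set()
    return [n for n in nodes if not (n in seen or seen.add(n))]

nodes = [f"COL = {i % 500}" for i in range(5000)]
assert dedup_with_list(nodes) == dedup_with_set(nodes)
print(len(dedup_with_set(nodes)))  # 500
```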



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Description: 
SQL queries using the IN operator with PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key ranges. 
[ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above

  was:
SQL queries using the IN operator using PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key 
ranges.[ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above


> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6751:


 Summary: Force using range scan vs skip scan when using the IN 
operator and large number of RVC elements 
 Key: PHOENIX-6751
 URL: https://issues.apache.org/jira/browse/PHOENIX-6751
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.0, 5.1.1, 4.15.0, 5.2.0
Reporter: Jacob Isaac


SQL queries using the IN operator with PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key 
ranges.[ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above
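The blow-up can be sketched with a small standalone Python model (illustrative only, not Phoenix code): if point keys are materialized as the cross product of the per-slot key sets, the list size multiplies with every slot, which is why tens of thousands of RVC elements can exhaust client memory.

```python
from itertools import product

def point_keys(slot_ranges):
    # Cross product of per-PK-slot point values -- a simplified model of how
    # a skip scan's point-key list can be materialized (illustrative only).
    return [b"".join(combo) for combo in product(*slot_ranges)]

# Two PK slots with 200 point values each already yield 40,000 point keys,
# so IN lists with ~50k RVC elements can translate into huge in-memory lists.
slots = [[bytes([i]) for i in range(200)], [bytes([j]) for j in range(200)]]
keys = point_keys(slots)
print(len(keys))  # 40000
```

Forcing a range scan, as this issue proposes, avoids materializing such a list at the cost of scanning a wider key range.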



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6688) Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number of tenant views

2022-04-22 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6688:


Assignee: Jacob Isaac

> Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number 
> of tenant views
> ---
>
> Key: PHOENIX-6688
> URL: https://issues.apache.org/jira/browse/PHOENIX-6688
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 4.17.0, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> Caused by: org.apache.phoenix.schema.MaxMutationSizeExceededException: ERROR 
> 729 (LIM01): MutationState size is bigger than maximum allowed number of 
> rows, try upserting rows in smaller batches or using autocommit on for 
> deletes.
> at 
> org.apache.phoenix.exception.SQLExceptionCode$21.newException(SQLExceptionCode.java:526)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:228)
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:191)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:175)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:142)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1341)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1280)
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:187)
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:93)
> at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1409)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1885)
> at 
> org.apache.phoenix.util.UpgradeUtil.moveChildLinks(UpgradeUtil.java:1181)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemChildLink(ConnectionQueryServicesImpl.java:4055)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeOtherSystemTablesIfRequired(ConnectionQueryServicesImpl.java:4033)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3958)
> at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.upgradeSystemTables(DelegateConnectionQueryServices.java:362)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExecuteUpgradeStatement$1.execute(PhoenixStatement.java:1445)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1866)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-04-22 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6687:


Assignee: Jacob Isaac

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 5.1.1, 4.16.1, 5.2.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while the 
> server is experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed because it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  changed getTable(..) so that it needs an additional server-to-server RPC 
> call when initializing a PhoenixConnection (CQSI.init) for the first time on 
> the JVM.
> It is well known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (PHOENIX-6688) Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number of tenant views

2022-04-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6688:


 Summary: Upgrade to phoenix 4.16 metadata upgrade fails when 
SYSCAT has large number of tenant views
 Key: PHOENIX-6688
 URL: https://issues.apache.org/jira/browse/PHOENIX-6688
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.16.1, 4.17.0, 5.2.0
Reporter: Jacob Isaac


Caused by: org.apache.phoenix.schema.MaxMutationSizeExceededException: ERROR 
729 (LIM01): MutationState size is bigger than maximum allowed number of rows, 
try upserting rows in smaller batches or using autocommit on for deletes.

at 
org.apache.phoenix.exception.SQLExceptionCode$21.newException(SQLExceptionCode.java:526)

at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:228)

at 
org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:191)

at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:175)

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:142)

at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1341)

at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1280)

at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:187)

at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:93)

at 
org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1409)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)

at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1885)

at org.apache.phoenix.util.UpgradeUtil.moveChildLinks(UpgradeUtil.java:1181)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemChildLink(ConnectionQueryServicesImpl.java:4055)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeOtherSystemTablesIfRequired(ConnectionQueryServicesImpl.java:4033)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3958)

at 
org.apache.phoenix.query.DelegateConnectionQueryServices.upgradeSystemTables(DelegateConnectionQueryServices.java:362)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExecuteUpgradeStatement$1.execute(PhoenixStatement.java:1445)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1866)
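The exception message itself points at the workaround: commit in smaller batches so the client-side MutationState never exceeds the configured maximum number of rows. A minimal sketch of that batching pattern (all helper names are hypothetical; this is not the actual UpgradeUtil fix):

```python
def upsert_in_batches(rows, execute_upsert, commit, batch_size=1000):
    # Commit every batch_size rows so the client-side MutationState stays
    # below the configured row limit (sketch; all names are illustrative).
    pending = 0
    for row in rows:
        execute_upsert(row)
        pending += 1
        if pending >= batch_size:
            commit()
            pending = 0
    if pending:
        commit()

# Stub "driver" to demonstrate the batching behavior:
commits, buf = [], []
def fake_commit():
    commits.append(len(buf))
    buf.clear()

upsert_in_batches(range(2500), buf.append, fake_commit, batch_size=1000)
print(commits)  # [1000, 1000, 500]
```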



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6687:
-
Summary: The region server hosting the SYSTEM.CATALOG fails to serve any 
metadata requests as default handler pool  threads are exhausted.  (was: The 
region server hosting the SYSTEM.CATALOG fails to serve any metadata requests 
as handler pools are exhausted.)

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while the 
> server is experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed because it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  changed getTable(..) so that it needs an additional server-to-server RPC 
> call when initializing a PhoenixConnection (CQSI.init) for the first time on 
> the JVM.
> It is well known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as handler pools are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6687:
-
Attachment: stacktraces.txt

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as handler pools are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while the 
> server is experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed because it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  changed getTable(..) so that it needs an additional server-to-server RPC 
> call when initializing a PhoenixConnection (CQSI.init) for the first time on 
> the JVM.
> It is well known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as handler pools are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6687:


 Summary: The region server hosting the SYSTEM.CATALOG fails to 
serve any metadata requests as handler pools are exhausted.
 Key: PHOENIX-6687
 URL: https://issues.apache.org/jira/browse/PHOENIX-6687
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.16.1, 5.2.0
Reporter: Jacob Isaac
 Fix For: 4.17.0, 5.2.0


This occurs when the SYSTEM.CATALOG region server is restarted while the server 
is experiencing heavy metadata call volume.

The stack traces indicate that all the default handler pool threads are waiting 
for the CQSI.init thread to finish initializing.
The CQSI.init thread itself cannot proceed because it cannot complete the second 
RPC call 
(org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
 due to thread starvation.

For example, the following 
[code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
 changed getTable(..) so that it needs an additional server-to-server RPC call 
when initializing a PhoenixConnection (CQSI.init) for the first time on the 
JVM.
It is well known that server-to-server RPC calls are prone to deadlocking due 
to thread pool exhaustion.
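The starvation can be captured by a toy model (a simplified sketch, not Phoenix code): every top-level request holds a handler while it waits on one nested RPC that must be served by the same default handler pool, so once in-flight top-level requests reach the pool size, no handler is left for the nested calls and nothing completes.

```python
def nested_rpc_can_run(pool_size, top_level_in_flight):
    # Each top-level request occupies a handler while blocked on its nested
    # server-to-server RPC; the nested call needs a free handler from the
    # SAME pool (simplified model of the CQSI.init scenario).
    free_handlers = pool_size - min(top_level_in_flight, pool_size)
    return free_handlers > 0

print(nested_rpc_can_run(pool_size=30, top_level_in_flight=10))   # True
print(nested_rpc_can_run(pool_size=30, top_level_in_flight=30))   # False -> deadlock
```

This is why the usual remedies are dedicated or priority handler pools for metadata calls, or removing the extra server-to-server hop entirely.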



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2021-08-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6530:
-
Fix Version/s: 5.1.3
Affects Version/s: 5.1.2

> Fix tenantId generation for Sequential and Uniform load generators
> --
>
> Key: PHOENIX-6530
> URL: https://issues.apache.org/jira/browse/PHOENIX-6530
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.17.0, 5.1.2
>Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.1.3
>
>
> While running the perf workloads for 4.16, we found that tenantId generation 
> across the various generators does not match.
> As a result, read queries fail when the writes/data were created using a 
> different generator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2021-08-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6530:


 Summary: Fix tenantId generation for Sequential and Uniform load 
generators
 Key: PHOENIX-6530
 URL: https://issues.apache.org/jira/browse/PHOENIX-6530
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.17.0
Reporter: Jacob Isaac
 Fix For: 4.17.0


While running the perf workloads for 4.16, we found that tenantId generation 
across the various generators does not match.

As a result, read queries fail when the writes/data were created using a 
different generator.
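A sketch of the invariant the fix needs (hypothetical generator code, not Pherf's actual classes): both the sequential and the uniform generator must derive tenant IDs from one shared formatter, otherwise IDs written by one generator can never be found by reads issued through the other.

```python
import random

def format_tenant_id(prefix, ordinal, width=15):
    # Single shared formatter: any drift here (prefix, padding, width)
    # makes reads miss rows written by the other generator.
    return f"{prefix}{ordinal:0{width}d}"

def sequential_tenant_ids(prefix, count):
    return [format_tenant_id(prefix, i) for i in range(count)]

def uniform_tenant_ids(prefix, count, draws, seed=42):
    rng = random.Random(seed)
    return [format_tenant_id(prefix, rng.randrange(count)) for _ in range(draws)]

written = set(sequential_tenant_ids("T", 100))
read_back = uniform_tenant_ids("T", 100, draws=20)
print(all(t in written for t in read_back))  # True, because both share the formatter
```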



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6432) Add support for additional load generators

2021-03-27 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6432:


 Summary: Add support for additional load generators
 Key: PHOENIX-6432
 URL: https://issues.apache.org/jira/browse/PHOENIX-6432
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6430) Add support for full row update for tables when no columns specified in scenario

2021-03-27 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6430:
-
Summary: Add support for full row update for tables when no columns 
specified in scenario  (was: Added support for full row update for tables when 
no columns specified in scenario)

> Add support for full row update for tables when no columns specified in 
> scenario
> ---
>
> Key: PHOENIX-6430
> URL: https://issues.apache.org/jira/browse/PHOENIX-6430
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6431) Add support for auto assigning pmfs

2021-03-27 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6431:
-
Summary: Add support for auto assigning pmfs  (was: Added support for auto 
assigning pmfs)

> Add support for auto assigning pmfs
> ---
>
> Key: PHOENIX-6431
> URL: https://issues.apache.org/jira/browse/PHOENIX-6431
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Priority: Major
>
> When defining a load profile it may be convenient to not specify the 
> probability distribution weights at all times



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6431) Added support for auto assigning pmfs

2021-03-27 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6431:


 Summary: Added support for auto assigning pmfs
 Key: PHOENIX-6431
 URL: https://issues.apache.org/jira/browse/PHOENIX-6431
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


When defining a load profile it may be convenient to not specify the 
probability distribution weights at all times



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6430) Added support for full row update for tables when no columns specified in scenario

2021-03-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6430:


 Summary: Added support for full row update for tables when no 
columns specified in scenario
 Key: PHOENIX-6430
 URL: https://issues.apache.org/jira/browse/PHOENIX-6430
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6429) Add support for global connections and sequential data generators

2021-03-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6429:


 Summary: Add support for global connections and sequential data 
generators
 Key: PHOENIX-6429
 URL: https://issues.apache.org/jira/browse/PHOENIX-6429
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


We may at times want to upsert or query using global connections. 

Also, add sequential data generators for data types beyond INTEGER and 
VARCHAR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6417) Fix PHERF ITs that are failing in the local builds

2021-03-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6417:


 Summary: Fix PHERF ITs that are failing in the local builds
 Key: PHOENIX-6417
 URL: https://issues.apache.org/jira/browse/PHOENIX-6417
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac
 Fix For: 4.17.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6416) Ensure that PHERF ITs are enabled and run during builds

2021-03-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6416:


 Summary: Ensure that PHERF ITs are enabled and run during builds
 Key: PHOENIX-6416
 URL: https://issues.apache.org/jira/browse/PHOENIX-6416
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac
 Fix For: 4.17.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2021-03-08 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6118:
-
Parent: PHOENIX-6406
Issue Type: Sub-task  (was: Improvement)

> Multi Tenant Workloads using PHERF
> --
>
> Key: PHOENIX-6118
> URL: https://issues.apache.org/jira/browse/PHOENIX-6118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
> number of tenant views.
> In the absence of a generic framework for dynamically creating a large number 
> of tenant views (including multi-leveled views) and querying them, teams have 
> to write custom logic to replay/run functional and perf 
> testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6406) PHERF Improvements

2021-03-08 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6406:


 Summary: PHERF Improvements
 Key: PHOENIX-6406
 URL: https://issues.apache.org/jira/browse/PHOENIX-6406
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.17.0
Reporter: Jacob Isaac
Assignee: Jacob Isaac


Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
number of tenant views.

In general, during releases, we need to have a perf framework to assess 
improvements/regressions that were introduced as part of the release.
 * Support for Multi leveled views dynamically and be able to query them in a 
generic framework
 * Support for global vs tenant connection when running load.
 * Support for various load generators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6374) Publish perf workload results and analysis

2021-02-09 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6374:


Assignee: Jacob Isaac

> Publish perf workload results and analysis
> --
>
> Key: PHOENIX-6374
> URL: https://issues.apache.org/jira/browse/PHOENIX-6374
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> Ran perf workloads against the 4.14.x, 4.15.x, and 4.16RC1 builds.
> The results and observations are published here for review -
> https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6374) Publish perf workload results and analysis

2021-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6374:


 Summary: Publish perf workload results and analysis
 Key: PHOENIX-6374
 URL: https://issues.apache.org/jira/browse/PHOENIX-6374
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.x
Reporter: Jacob Isaac


Ran perf workloads against the 4.14.x, 4.15.x, and 4.16RC1 builds.

The results and observations are published here for review -

https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6348) java.lang.NoClassDefFoundError: when running with hbase-1.6

2021-01-28 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6348:


 Summary: java.lang.NoClassDefFoundError: when running with 
hbase-1.6
 Key: PHOENIX-6348
 URL: https://issues.apache.org/jira/browse/PHOENIX-6348
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.0
Reporter: Jacob Isaac
 Fix For: 4.16.0


Getting this error when running with HBase 1.6.

I think this stems from a jar dependency mismatch between Phoenix 4.x/4.16 
and HBase 1.6:

hbase-1.6 :  commons-cli-1.2.jar 
(https://github.com/apache/hbase/blob/5ec5a5b115ee36fb28903667c008218abd21b3f5/pom.xml#L1260)

phoenix 4.x : commons-cli-1.4.jar 
(https://github.com/apache/phoenix/blob/44d44029597d032af1be54d5e9a70342c1fe4769/pom.xml#L100)

 

What is the best way to resolve this? Shading?

[~stoty] [~vjasani]

FYI

[~yanxinyi] [~ChinmayKulkarni] [~kadir]

 

Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/cli/DefaultParser
 at 
org.apache.phoenix.mapreduce.index.IndexTool.parseOptions(IndexTool.java:354)
 at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:788)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
 at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:1201)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.cli.DefaultParser
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
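
One way to investigate the mismatch is to check which classpath entry actually supplies org.apache.commons.cli.DefaultParser (the class was only introduced in commons-cli 1.3, so a commons-cli-1.2.jar shadowing 1.4 produces exactly this NoClassDefFoundError). A minimal, stdlib-only diagnostic sketch (the class name WhichJar is illustrative, not part of Phoenix):

```java
// Diagnostic sketch: prints which classpath element supplies a given class,
// to spot jar version conflicts such as commons-cli-1.2 (no DefaultParser)
// shadowing commons-cli-1.4 on the job classpath.
public class WhichJar {
    static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap/platform classes report no code source.
            return (src == null) ? "bootstrap classpath" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            // Printed when the class is absent, e.g. no commons-cli jar at all.
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println("java.lang.String -> " + locationOf("java.lang.String"));
        System.out.println("org.apache.commons.cli.DefaultParser -> "
                + locationOf("org.apache.commons.cli.DefaultParser"));
    }
}
```

Running this on the same classpath the failing IndexTool uses would show whether DefaultParser resolves at all, and if so, from which jar.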



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6341) Enable running IT tests from PHERF module during builds and patch checkins

2021-01-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6341:


 Summary: Enable running IT tests from PHERF module during builds 
and patch checkins
 Key: PHOENIX-6341
 URL: https://issues.apache.org/jira/browse/PHOENIX-6341
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.x
Reporter: Jacob Isaac
 Fix For: 4.x






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6339) Older client using aggregate queries shows incorrect results.

2021-01-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6339:
-
Issue Type: Bug  (was: Improvement)

> Older client using aggregate queries shows incorrect results.
> -
>
> Key: PHOENIX-6339
> URL: https://issues.apache.org/jira/browse/PHOENIX-6339
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.0
>Reporter: Jacob Isaac
>Priority: Blocker
> Fix For: 4.16.0
>
>
> When running an older client (e.g. 4.15) against a 4.16 server, the output
> of aggregate queries is incorrect -
> expected one row with the count; actual 9 rows with counts.
> The 9 rows correspond to the number of regions in the data set, as shown in 
> the explain plan.
> Connected to: Phoenix (version 4.15)
> Driver: PhoenixEmbeddedDriver (version 4.15)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 225/225 (100%) Done
> Done
> sqlline version 1.5.0
> 0: jdbc:phoenix:localhost> select count(*) from 
> BENCHMARK.BM_AGGREGATION_TABLE_2;
> +---+
> | COUNT(1) |
> +---+
> | 2389483 |
> | 2319177 |
> | 1958007 |
> | 2389483 |
> | 2319178 |
> | 1958005 |
> | 2233646 |
> | 2249033 |
> | 2183988 |
> +---+
> 9 rows selected (6.56 seconds)
> 0: jdbc:phoenix:localhost> explain select count(*) from 
> BENCHMARK.BM_AGGREGATION_TABLE_2;
> +---+-+++
> | PLAN | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> +---+-+++
> | CLIENT 9-CHUNK 10191406 ROWS 1887436990 BYTES PARALLEL 1-WAY FULL SCAN OVER 
> BENCHMARK.BM_AGGREGATION_TABLE_2 | 1887436990 | 10191406 | 1611584394492 |
> | SERVER FILTER BY FIRST KEY ONLY | 1887436990 | 10191406 | 1611584394492 |
> | SERVER AGGREGATE INTO SINGLE ROW | 1887436990 | 10191406 | 1611584394492 |
> +---+-+++



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6339) Older client using aggregate queries shows incorrect results.

2021-01-25 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6339:


 Summary: Older client using aggregate queries shows incorrect 
results.
 Key: PHOENIX-6339
 URL: https://issues.apache.org/jira/browse/PHOENIX-6339
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.16.0
Reporter: Jacob Isaac
 Fix For: 4.16.0


When running an older client (e.g. 4.15) against a 4.16 server, the output of 
aggregate queries is incorrect -

expected one row with the count; actual 9 rows with counts.

The 9 rows correspond to the number of regions in the data set, as shown in the 
explain plan.

Connected to: Phoenix (version 4.15)
Driver: PhoenixEmbeddedDriver (version 4.15)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true 
to skip)...
225/225 (100%) Done
Done
sqlline version 1.5.0
0: jdbc:phoenix:localhost> select count(*) from 
BENCHMARK.BM_AGGREGATION_TABLE_2;
+---+
| COUNT(1) |
+---+
| 2389483 |
| 2319177 |
| 1958007 |
| 2389483 |
| 2319178 |
| 1958005 |
| 2233646 |
| 2249033 |
| 2183988 |
+---+
9 rows selected (6.56 seconds)
0: jdbc:phoenix:localhost> explain select count(*) from 
BENCHMARK.BM_AGGREGATION_TABLE_2;
+---+-+++
| PLAN | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
+---+-+++
| CLIENT 9-CHUNK 10191406 ROWS 1887436990 BYTES PARALLEL 1-WAY FULL SCAN OVER 
BENCHMARK.BM_AGGREGATION_TABLE_2 | 1887436990 | 10191406 | 1611584394492 |
| SERVER FILTER BY FIRST KEY ONLY | 1887436990 | 10191406 | 1611584394492 |
| SERVER AGGREGATE INTO SINGLE ROW | 1887436990 | 10191406 | 1611584394492 |
+---+-+++



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6312) Need a util method in PhoenixMapReduceUtil along the lines of TableMapReduceUtil.addHBaseDependencyJars

2021-01-12 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6312:
-
Fix Version/s: 4.16.0

> Need a util method in PhoenixMapReduceUtil along the lines of 
> TableMapReduceUtil.addHBaseDependencyJars
> ---
>
> Key: PHOENIX-6312
> URL: https://issues.apache.org/jira/browse/PHOENIX-6312
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.x
>Reporter: Jacob Isaac
>Priority: Blocker
> Fix For: 4.16.0, 4.x
>
>
> Now that we have phoenix-hbase-compat-x-x-x jars, we need to make the classes 
> in the compat jar available to the MR jobs.
> TableMapReduceUtil.addHBaseDependencyJars is an example of how HBase 
> dependency jars are made available to the MR job.
> We get the following exception when these jars are not made available to MR 
> jobs
> Error: java.lang.ClassNotFoundException: 
> org.apache.phoenix.compat.hbase.CompatRpcControllerFactory at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> java.lang.ClassLoader.defineClass1(Native Method) at 
> java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
> java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at 
> java.net.URLClassLoader.access$100(URLClassLoader.java:73) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:368) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:362) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:361) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> org.apache.phoenix.query.QueryServicesOptions.&lt;init&gt;(QueryServicesOptions.java:288)
>  at 
> org.apache.phoenix.query.QueryServicesImpl.&lt;init&gt;(QueryServicesImpl.java:36) 
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getQueryServices(PhoenixDriver.java:197)
>  at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:235)
>  at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
>  at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) at 
> java.sql.DriverManager.getConnection(DriverManager.java:664) at 
> java.sql.DriverManager.getConnection(DriverManager.java:208) at 
> org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
>  at 
> org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
>  at 
> org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat.getQueryPlan(PhoenixServerBuildIndexInputFormat.java:94)
>  at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:79)
>  at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.&lt;init&gt;(MapTask.java:521)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6312) Need a util method in PhoenixMapReduceUtil along the lines of TableMapReduceUtil.addHBaseDependencyJars

2021-01-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6312:


 Summary: Need a util method in PhoenixMapReduceUtil along the 
lines of TableMapReduceUtil.addHBaseDependencyJars
 Key: PHOENIX-6312
 URL: https://issues.apache.org/jira/browse/PHOENIX-6312
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.x
Reporter: Jacob Isaac
 Fix For: 4.x


Now that we have phoenix-hbase-compat-x-x-x jars, we need to make the classes 
in the compat jar available to the MR jobs.

TableMapReduceUtil.addHBaseDependencyJars is an example of how HBase dependency 
jars are made available to the MR job.

We get the following exception when these jars are not made available to MR jobs
Error: java.lang.ClassNotFoundException: 
org.apache.phoenix.compat.hbase.CompatRpcControllerFactory at 
java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
java.lang.ClassLoader.defineClass1(Native Method) at 
java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at 
java.net.URLClassLoader.access$100(URLClassLoader.java:73) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:368) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:362) at 
java.security.AccessController.doPrivileged(Native Method) at 
java.net.URLClassLoader.findClass(URLClassLoader.java:361) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
org.apache.phoenix.query.QueryServicesOptions.&lt;init&gt;(QueryServicesOptions.java:288)
 at 
org.apache.phoenix.query.QueryServicesImpl.&lt;init&gt;(QueryServicesImpl.java:36) at 
org.apache.phoenix.jdbc.PhoenixDriver.getQueryServices(PhoenixDriver.java:197) 
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:235)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) at 
java.sql.DriverManager.getConnection(DriverManager.java:664) at 
java.sql.DriverManager.getConnection(DriverManager.java:208) at 
org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
 at 
org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
 at 
org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat.getQueryPlan(PhoenixServerBuildIndexInputFormat.java:94)
 at 
org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:79)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.&lt;init&gt;(MapTask.java:521)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
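
The first step of such a util method is locating the classpath element (jar or build directory) that provides a given class, in the spirit of TableMapReduceUtil's dependency handling, so it can be shipped with the job. A hedged, stdlib-only sketch (the class and method names here are illustrative, not a proposed Phoenix API):

```java
import java.net.URL;

// Sketch: find the classpath element that provides a class. A helper along
// the lines of addHBaseDependencyJars would do this for each compat class
// (e.g. CompatRpcControllerFactory) and add the result to the job's tmpjars.
public class FindContainingJar {
    static String pathFor(Class<?> clazz) {
        String resource = clazz.getName().replace('.', '/') + ".class";
        ClassLoader cl = clazz.getClassLoader();
        URL url = (cl != null) ? cl.getResource(resource)
                               : ClassLoader.getSystemResource(resource);
        return (url == null) ? null : url.toString();
    }

    public static void main(String[] args) {
        // For an application class this points at the jar (or build directory)
        // that would need to be shipped alongside the MR job.
        System.out.println(pathFor(FindContainingJar.class));
    }
}
```

With the paths in hand, the actual util method would add them to the distributed cache / tmpjars, which is exactly the step that is missing for the phoenix-hbase-compat jars today.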
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: PHOENIX-5601.master.001.patch

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x.003.patch, PHOENIX-5601.master.001.patch
>
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.
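
The masking rule described above boils down to a timestamp comparison against the empty column's cell timestamp. A minimal sketch of that check (illustrative names, not the actual PhoenixTTLRegionObserver code):

```java
public class PhoenixTtlCheck {
    // Sketch of the expiry rule: a row is expired when the empty-column cell
    // timestamp is older than now minus the view's TTL.
    static boolean isExpired(long emptyColumnTsMillis, long ttlSeconds, long nowMillis) {
        return nowMillis - emptyColumnTsMillis > ttlSeconds * 1000L;
    }

    public static void main(String[] args) {
        long now = 10_000_000L;
        // Row written 5 seconds ago with a 10-second TTL: not yet expired.
        System.out.println(isExpired(now - 5_000L, 10, now));
        // Row written 20 seconds ago with a 10-second TTL: expired, so the
        // scanner would mask it (or delete it when the delete flag is present).
        System.out.println(isExpired(now - 20_000L, 10, now));
    }
}
```

The real scanner additionally has to resolve the TTL from the view hierarchy and operate per-row inside the region scan, but the decision per row is this comparison.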



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x.002.patch)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Summary: PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - 
PhoenixTTLRegionObserver  (was: Add a new Coprocessor - ViewTTLAware 
Coprocessor)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x.001.patch)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.master.008.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x-HBase-1.3.008.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-11 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reopened PHOENIX-5601:
--

After discussion with [~kozdemir] [~larsh] and others, we arrived at the 
following decision:
 # The client-side masking may not handle all use cases, e.g. server-side 
scans.
 # As the long-term goal is to extend this to Phoenix tables too, using a 
co-proc might be more efficient and make it easier to manage dependencies with 
other backend processes like backups, compaction ...

More details in this design doc - PHOENIX-5934

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x-HBase-1.3.008.patch, 
> PHOENIX-5601.master.008.patch
>
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-11 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Affects Version/s: (was: 4.15.0)
   4.16.0

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x-HBase-1.3.008.patch, 
> PHOENIX-5601.master.008.patch
>
>
>  * Add a New coprocessor - ViewTTLAware Coprocessor that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will -
>   * Use the row timestamp of the empty column to determine whether row TTL 
> has expired  and mask the rows from underlying query results.
>   * Use the row timestamp to delete expired rows when DELETE_VIEW_TTL_EXPIRED 
> flag is present.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: (was: PHOENIX-6171.4.x.002.patch)

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: (was: PHOENIX-6171.4.x.001.patch)

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-6171.4.x.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: PHOENIX-6171.4.x.002.patch

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.x
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-6171.4.x.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6179) Relax the MaxLookBack age checks during an upgrade

2020-10-06 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6179:


 Summary: Relax the MaxLookBack age checks during an upgrade
 Key: PHOENIX-6179
 URL: https://issues.apache.org/jira/browse/PHOENIX-6179
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac


Getting this error when trying to upgrade a cluster - Error: ERROR 538 (42915): 
Cannot use SCN to look further back in the past beyond the configured max 
lookback age (state=42915,code=538)

During the upgrade, the SCN for the connection is set to the Phoenix version 
timestamp, which is a small number.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6170) PHOENIX_TTL spec should be in seconds instead of milliseconds

2020-10-05 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6170:
-
Summary: PHOENIX_TTL spec should be in seconds instead of milliseconds  
(was: PHOENIX_TTL spec should in seconds instead of milliseconds)

> PHOENIX_TTL spec should be in seconds instead of milliseconds
> -
>
> Key: PHOENIX-6170
> URL: https://issues.apache.org/jira/browse/PHOENIX-6170
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> When defining the PHOENIX_TTL spec it should be specified in seconds, which 
> is also how HBase TTL value is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6170) PHOENIX_TTL spec should in seconds instead of milliseconds

2020-10-01 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6170:


Assignee: Jacob Isaac

> PHOENIX_TTL spec should in seconds instead of milliseconds
> --
>
> Key: PHOENIX-6170
> URL: https://issues.apache.org/jira/browse/PHOENIX-6170
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> When defining the PHOENIX_TTL spec it should be specified in seconds, which 
> is also how HBase TTL value is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6170) PHOENIX_TTL spec should in seconds instead of milliseconds

2020-10-01 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6170:
-
Parent: PHOENIX-3725
Issue Type: Sub-task  (was: Improvement)

> PHOENIX_TTL spec should in seconds instead of milliseconds
> --
>
> Key: PHOENIX-6170
> URL: https://issues.apache.org/jira/browse/PHOENIX-6170
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Priority: Major
>
> When defining the PHOENIX_TTL spec it should be specified in seconds, which 
> is also how HBase TTL value is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-01 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6171:


 Summary: Child views should not be allowed to override the parent 
view PHOENIX_TTL attribute.
 Key: PHOENIX-6171
 URL: https://issues.apache.org/jira/browse/PHOENIX-6171
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6170) PHOENIX_TTL spec should in seconds instead of milliseconds

2020-10-01 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6170:


 Summary: PHOENIX_TTL spec should in seconds instead of milliseconds
 Key: PHOENIX-6170
 URL: https://issues.apache.org/jira/browse/PHOENIX-6170
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


When defining the PHOENIX_TTL spec it should be specified in seconds, which is 
also how HBase TTL value is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2020-09-08 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6118:


 Summary: Multi Tenant Workloads using PHERF
 Key: PHOENIX-6118
 URL: https://issues.apache.org/jira/browse/PHOENIX-6118
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
number of tenant views.
In the absence of a generic framework for dynamically creating a large number 
of tenant views - including multi-level views - and querying them, teams have 
to write custom logic to replay/run functional and perf testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2020-09-08 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6118:


Assignee: Jacob Isaac

> Multi Tenant Workloads using PHERF
> --
>
> Key: PHOENIX-6118
> URL: https://issues.apache.org/jira/browse/PHOENIX-6118
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
> number of tenant views.
> In the absence of a generic framework for dynamically creating a large number 
> of tenant views - including multi-level views - and querying them, teams 
> have to write custom logic to replay/run functional and perf testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5933) Rename VIEW_TTL property to be PHOENIX_TTL

2020-07-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5933:
-
Attachment: PHOENIX-5933.master.001.patch

> Rename VIEW_TTL property to be PHOENIX_TTL
> --
>
> Key: PHOENIX-5933
> URL: https://issues.apache.org/jira/browse/PHOENIX-5933
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5933.4.x.001.patch, PHOENIX-5933.master.001.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> As there is more interest in extending the TTL feature to all Phoenix 
> entities (tables and views), and more specifically in separating the usage 
> of Phoenix application-level TTL (row-based) from HBase-level TTL 
> (store/column family-based), it makes sense to rename the Phoenix table 
> property to PHOENIX_TTL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5935) Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails

2020-07-14 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5935:
-
Attachment: (was: PHOENIX-5935.4.x.002.patch)

> Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails
> --
>
> Key: PHOENIX-5935
> URL: https://issues.apache.org/jira/browse/PHOENIX-5935
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.x
>
>
> This fails when COLUMN_ENCODED_BYTES != NON_ENCODED_QUALIFIERS
> Steps to reproduce:-
> CREATE TABLE IF NOT EXISTS N01 (PK1 INTEGER NOT NULL, PK2 DATE NOT NULL, 
> KV1 VARCHAR, KV2 VARCHAR CONSTRAINT PK PRIMARY KEY(PK1, PK2)) 
> COLUMN_ENCODED_BYTES = 1,IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN
>  
> SELECT COUNT(*) FROM N01 WHERE PHOENIX_ROW_TIMESTAMP() > PK2 AND KV1 = 
> 'KV1_1';
>  
> Fails with the following exception -
> Caused by: java.util.NoSuchElementException at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$FilteredKeyValueHolder.getCellAtIndex(MultiEncodedCQKeyValueComparisonFilter.java:151)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$EncodedCQIncrementalResultTuple.getValue(MultiEncodedCQKeyValueComparisonFilter.java:311)
>  at 
> org.apache.phoenix.expression.function.PhoenixRowTimestampFunction.evaluate(PhoenixRowTimestampFunction.java:98)
>  at 
> org.apache.phoenix.expression.ComparisonExpression.evaluate(ComparisonExpression.java:330)
>  at 
> org.apache.phoenix.expression.AndOrExpression.evaluate(AndOrExpression.java:75)
>  at 
> org.apache.phoenix.filter.BooleanExpressionFilter.evaluate(BooleanExpressionFilter.java:93)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:233)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher.matchColumn(UserScanQueryMatcher.java:122)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5898) Phoenix function CURRENT_TIME() returns wrong result when view indexes are used.

2020-06-24 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5898:
-
Attachment: (was: PHOENIX-5898.4.x.001.patch)

> Phoenix function CURRENT_TIME() returns wrong result when view indexes are 
> used.
> 
>
> Key: PHOENIX-5898
> URL: https://issues.apache.org/jira/browse/PHOENIX-5898
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: SampleTest.java
>
>
> Here is a sample test.
> [^SampleTest.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5935) Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails

2020-06-24 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5935:
-
Attachment: (was: PHOENIX-5935.4.x.001.patch)

> Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails
> --
>
> Key: PHOENIX-5935
> URL: https://issues.apache.org/jira/browse/PHOENIX-5935
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> This fails when COLUMN_ENCODED_BYTES != NON_ENCODED_QUALIFIERS
> Steps to reproduce:-
> CREATE TABLE IF NOT EXISTS N01 (PK1 INTEGER NOT NULL, PK2 DATE NOT NULL, 
> KV1 VARCHAR, KV2 VARCHAR CONSTRAINT PK PRIMARY KEY(PK1, PK2)) 
> COLUMN_ENCODED_BYTES = 1,IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN
>  
> SELECT COUNT(*) FROM N01 WHERE PHOENIX_ROW_TIMESTAMP() > PK2 AND KV1 = 
> 'KV1_1';
>  
> Fails with the following exception -
> Caused by: java.util.NoSuchElementException at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$FilteredKeyValueHolder.getCellAtIndex(MultiEncodedCQKeyValueComparisonFilter.java:151)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$EncodedCQIncrementalResultTuple.getValue(MultiEncodedCQKeyValueComparisonFilter.java:311)
>  at 
> org.apache.phoenix.expression.function.PhoenixRowTimestampFunction.evaluate(PhoenixRowTimestampFunction.java:98)
>  at 
> org.apache.phoenix.expression.ComparisonExpression.evaluate(ComparisonExpression.java:330)
>  at 
> org.apache.phoenix.expression.AndOrExpression.evaluate(AndOrExpression.java:75)
>  at 
> org.apache.phoenix.filter.BooleanExpressionFilter.evaluate(BooleanExpressionFilter.java:93)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:233)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher.matchColumn(UserScanQueryMatcher.java:122)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5935) Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails

2020-06-02 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5935:
-
Description: 
This fails when COLUMN_ENCODED_BYTES != NON_ENCODED_QUALIFIERS

Steps to reproduce:-

CREATE TABLE IF NOT EXISTS N01 (PK1 INTEGER NOT NULL, PK2 DATE NOT NULL, 
KV1 VARCHAR, KV2 VARCHAR CONSTRAINT PK PRIMARY KEY(PK1, PK2)) 
COLUMN_ENCODED_BYTES = 1,IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN

 

SELECT COUNT(*) FROM N01 WHERE PHOENIX_ROW_TIMESTAMP() > PK2 AND KV1 = 
'KV1_1';

 

Fails with the following exception -

Caused by: java.util.NoSuchElementException at 
org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$FilteredKeyValueHolder.getCellAtIndex(MultiEncodedCQKeyValueComparisonFilter.java:151)
 at 
org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$EncodedCQIncrementalResultTuple.getValue(MultiEncodedCQKeyValueComparisonFilter.java:311)
 at 
org.apache.phoenix.expression.function.PhoenixRowTimestampFunction.evaluate(PhoenixRowTimestampFunction.java:98)
 at 
org.apache.phoenix.expression.ComparisonExpression.evaluate(ComparisonExpression.java:330)
 at 
org.apache.phoenix.expression.AndOrExpression.evaluate(AndOrExpression.java:75) 
at 
org.apache.phoenix.filter.BooleanExpressionFilter.evaluate(BooleanExpressionFilter.java:93)
 at 
org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:233)
 at 
org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher.matchColumn(UserScanQueryMatcher.java:122)
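
The description notes the failure occurs only when COLUMN_ENCODED_BYTES != 
NON_ENCODED_QUALIFIERS. A minimal sketch of a possible workaround, under the 
assumption that the bug is specific to the encoded-qualifier filter path 
(MultiEncodedCQKeyValueComparisonFilter), is to create the table with column 
encoding disabled (COLUMN_ENCODED_BYTES = 0 selects NON_ENCODED_QUALIFIERS); 
the table and column names follow the repro above:

```sql
-- Sketch only: with encoding disabled, the MultiEncodedCQKeyValueComparisonFilter
-- code path in the stack trace above should not be taken.
CREATE TABLE IF NOT EXISTS N01 (
    PK1 INTEGER NOT NULL,
    PK2 DATE NOT NULL,
    KV1 VARCHAR,
    KV2 VARCHAR
    CONSTRAINT PK PRIMARY KEY (PK1, PK2))
    COLUMN_ENCODED_BYTES = 0,
    IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN;

SELECT COUNT(*) FROM N01
WHERE PHOENIX_ROW_TIMESTAMP() > PK2 AND KV1 = 'KV1_1';
```

This trades away the storage savings of encoded column qualifiers, so it is a 
mitigation while the filter bug is fixed, not a recommendation.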

> Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails
> --
>
> Key: PHOENIX-5935
> URL: https://issues.apache.org/jira/browse/PHOENIX-5935
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> This fails when COLUMN_ENCODED_BYTES != NON_ENCODED_QUALIFIERS
> Steps to reproduce:-
> CREATE TABLE IF NOT EXISTS N01 (PK1 INTEGER NOT NULL, PK2 DATE NOT NULL, 
> KV1 VARCHAR, KV2 VARCHAR CONSTRAINT PK PRIMARY KEY(PK1, PK2)) 
> COLUMN_ENCODED_BYTES = 1,IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN
>  
> SELECT COUNT(*) FROM N01 WHERE PHOENIX_ROW_TIMESTAMP() > PK2 AND KV1 = 
> 'KV1_1';
>  
> Fails with the following exception -
> Caused by: java.util.NoSuchElementException at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$FilteredKeyValueHolder.getCellAtIndex(MultiEncodedCQKeyValueComparisonFilter.java:151)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter$EncodedCQIncrementalResultTuple.getValue(MultiEncodedCQKeyValueComparisonFilter.java:311)
>  at 
> org.apache.phoenix.expression.function.PhoenixRowTimestampFunction.evaluate(PhoenixRowTimestampFunction.java:98)
>  at 
> org.apache.phoenix.expression.ComparisonExpression.evaluate(ComparisonExpression.java:330)
>  at 
> org.apache.phoenix.expression.AndOrExpression.evaluate(AndOrExpression.java:75)
>  at 
> org.apache.phoenix.filter.BooleanExpressionFilter.evaluate(BooleanExpressionFilter.java:93)
>  at 
> org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:233)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher.matchColumn(UserScanQueryMatcher.java:122)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5935) Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails

2020-06-02 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-5935:


Assignee: Jacob Isaac

> Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails
> --
>
> Key: PHOENIX-5935
> URL: https://issues.apache.org/jira/browse/PHOENIX-5935
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5935) Select with non primary keys and PHOENIX_ROW_TIMESTAMP() in where clause fails

2020-06-02 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-5935:


 Summary: Select with non primary keys and PHOENIX_ROW_TIMESTAMP() 
in where clause fails
 Key: PHOENIX-5935
 URL: https://issues.apache.org/jira/browse/PHOENIX-5935
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5934) Design doc for PHOENIX TTL

2020-06-02 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-5934:


Assignee: Jacob Isaac

> Design doc for PHOENIX TTL
> --
>
> Key: PHOENIX-5934
> URL: https://issues.apache.org/jira/browse/PHOENIX-5934
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
>
> In the absence of TTL support on Phoenix entities, teams have to write custom 
> application logic and use the Phoenix API to delete expired data in bulk. 
> This proposal lays out a design to support TTL on Phoenix entities by masking 
> expired rows during query time and providing a framework for deleting expired 
> rows.
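> 
> The custom application logic mentioned above typically amounts to a periodic 
> bulk delete keyed on a timestamp column. A minimal sketch (the EVENTS table, 
> its CREATED_DATE column, and the 30-day retention are hypothetical examples, 
> not part of this proposal):
> 
> DELETE FROM EVENTS WHERE CREATED_DATE < CURRENT_DATE() - 30;
> 
> Native TTL support would replace such jobs with query-time masking of 
> expired rows plus a server-side purge framework, as this design proposes.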



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

