[jira] [Commented] (PHOENIX-6636) Replace bundled log4j libraries with reload4j

2022-02-25 Thread Lars Hofhansl (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498354#comment-17498354 ]

Lars Hofhansl commented on PHOENIX-6636:


(y)

> Replace bundled log4j libraries with reload4j
> -
>
> Key: PHOENIX-6636
> URL: https://issues.apache.org/jira/browse/PHOENIX-6636
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core, queryserver
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2, 5.1.3
>
>
> To reduce the number of dependencies with unresolved CVEs, replace the bundled 
> log4j libraries with reload4j ([https://reload4j.qos.ch/]).
> This will also require bumping the slf4j version.
> This is a quick fix, and does not preclude moving to some different backend 
> later (like log4j2 or logback).
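
A side note on why the swap is low-risk: code that logs through the slf4j facade needs no source changes, because reload4j preserves the org.apache.log4j binary layout and slf4j picks the backend from the classpath. A minimal illustrative sketch (the class name is invented, not from the Phoenix code base):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical example class, not part of Phoenix.
public class FacadeLoggingExample {
    private static final Logger LOGGER = LoggerFactory.getLogger(FacadeLoggingExample.class);

    public static void main(String[] args) {
        // This call is backend-agnostic: whether log4j 1.x, reload4j, log4j2,
        // or logback handles it is decided by the slf4j binding jar on the
        // classpath, which is why only the bundled jars and the slf4j version
        // need to change.
        LOGGER.info("logging through the slf4j facade");
    }
}
{code}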





[jira] [Commented] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-02-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498343#comment-17498343 ]

ASF GitHub Bot commented on PHOENIX-6649:

gokceni commented on a change in pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#discussion_r815206739



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/transform/TransformMonitorExtendedIT.java
##########
@@ -156,16 +162,138 @@ public void testTransformTableWithNamespaceEnabled() throws Exception {
             waitForTransformToGetToState(conn.unwrap(PhoenixConnection.class), record, PTable.TransformStatus.COMPLETED);
             SingleCellIndexIT.assertMetadata(conn, PTable.ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS, PTable.QualifierEncodingScheme.TWO_BYTE_QUALIFIERS, record.getNewPhysicalTableName());
             TransformToolIT.upsertRows(conn, fullDataTableName, 2, 1);
-            assertEquals(numOfRows+1, TransformMonitorIT.countRows(conn, fullDataTableName));
+            assertEquals(numOfRows + 1, TransformMonitorIT.countRows(conn, fullDataTableName));
 
             ResultSet rs = conn.createStatement().executeQuery("SELECT ID, ZIP FROM " + fullDataTableName);
             assertTrue(rs.next());
             assertEquals("1", rs.getString(1));
-            assertEquals( 95051, rs.getInt(2));
+            assertEquals(95051, rs.getInt(2));
             assertTrue(rs.next());
             assertEquals("2", rs.getString(1));
-            assertEquals( 95052, rs.getInt(2));
+            assertEquals(95052, rs.getInt(2));
             assertFalse(rs.next());
         }
     }
+
+    @Test
+    public void testTransformWithGlobalAndTenantViews() throws Exception {
+        String schemaName = generateUniqueName();
+        String dataTableName1 = generateUniqueName();
+        String dataTableFullName1 = SchemaUtil.getTableName(schemaName, dataTableName1);
+        String namespaceMappedDataTableName1 = SchemaUtil.getPhysicalHBaseTableName(schemaName, dataTableName1, true).getString();
+        String view1Name = SchemaUtil.getTableName(schemaName, "VW1_" + generateUniqueName());
+        String view2Name = SchemaUtil.getTableName(schemaName, "VW2_" + generateUniqueName());
+        String tenantView = SchemaUtil.getTableName(schemaName, "VWT_" + generateUniqueName());
+        String readOnlyTenantView = SchemaUtil.getTableName(schemaName, "ROVWT_" + generateUniqueName());
+
+        try (Connection conn = DriverManager.getConnection(getUrl(), propsNamespace)) {
+            conn.setAutoCommit(true);
+            int numOfRows = 1;
+            conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
+            TransformToolIT.createTableAndUpsertRows(conn, dataTableFullName1, numOfRows, "TABLE_ONLY", dataTableDdl);
+
+            SingleCellIndexIT.assertMetadata(conn, PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN, PTable.QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, dataTableFullName1);
+
+            String createViewSql = "CREATE VIEW " + view1Name + " ( VIEW_COL1 INTEGER, VIEW_COL2 VARCHAR ) AS SELECT * FROM "
+                    + dataTableFullName1 + " where DATA='GLOBAL_VIEW' ";
+            conn.createStatement().execute(createViewSql);
+            PreparedStatement stmt1 = conn.prepareStatement(String.format("UPSERT INTO %s VALUES(?, ? , ?, ?, ?,?)", view1Name));
+            stmt1.setInt(1, 2);
+            stmt1.setString(2, "uname2");
+            stmt1.setInt(3, 95053);
+            stmt1.setString(4, "GLOBAL_VIEW");
+            stmt1.setInt(5, 111);
+            stmt1.setString(6, "viewcol2");
+            stmt1.executeUpdate();
+
+            createViewSql = "CREATE VIEW " + view2Name + " ( VIEW_COL1 INTEGER, VIEW_COL2 VARCHAR ) AS SELECT * FROM "
+                    + dataTableFullName1 + " where DATA='GLOBAL_VIEW' AND ZIP=95053";

Review comment:
   @gjacoby126 for overlapping view






> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Priority: Major
>








[jira] [Commented] (PHOENIX-6656) Reindent NonAggregateRegionScannerFactory

2022-02-25 Thread Kadir OZDEMIR (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498334#comment-17498334 ]

Kadir OZDEMIR commented on PHOENIX-6656:


[~giskender], thank you for checking the patch. I pushed it to master, 5.1, and 4.x.

> Reindent NonAggregateRegionScannerFactory
> -
>
> Key: PHOENIX-6656
> URL: https://issues.apache.org/jira/browse/PHOENIX-6656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Trivial
> Fix For: 4.17.0, 5.2.0, 5.1.3
>
> Attachments: PHOENIX-6656.master.001.patch
>
>
> The indentation in the NonAggregateRegionScannerFactory.java file is badly 
> broken and results in failures in code style checks whenever we make changes 
> to this file.





[jira] [Commented] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-02-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498331#comment-17498331 ]

ASF GitHub Bot commented on PHOENIX-6649:

gokceni commented on a change in pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#discussion_r815191511



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/transform/PhoenixTransformWithViewsInputFormat.java
##########
@@ -0,0 +1,116 @@
+package org.apache.phoenix.mapreduce.transform;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.compile.MutationPlan;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.compile.ServerBuildTransformingTableCompiler;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.PhoenixInputFormat;
+import org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.transform.Transform;
+import org.apache.phoenix.thirdparty.com.google.common.base.Strings;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Properties;
+
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES;
+import static org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getIndexToolIndexTableName;
+
+public class PhoenixTransformWithViewsInputFormat<T extends DBWritable> extends PhoenixServerBuildIndexInputFormat<T> {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(PhoenixTransformWithViewsInputFormat.class);
+    @Override
+    public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
+        final Configuration configuration = context.getConfiguration();
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)) {
+            try (Table hTable = connection.unwrap(PhoenixConnection.class).getQueryServices().getTable(
+                    SchemaUtil.getPhysicalTableName(SYSTEM_CHILD_LINK_NAME_BYTES, configuration).toBytes())) {
+                String oldDataTableFullName = PhoenixConfigurationUtil.getIndexToolDataTableName(configuration);
+                String newDataTableFullName = getIndexToolIndexTableName(configuration);
+                PTable newDataTable = PhoenixRuntime.getTableNoCache(connection, newDataTableFullName);
+                String schemaName = SchemaUtil.getSchemaNameFromFullName(oldDataTableFullName);
+                String tableName = SchemaUtil.getTableNameFromFullName(oldDataTableFullName);
+                byte[] schemaNameBytes = Strings.isNullOrEmpty(schemaName) ? null : schemaName.getBytes();
+                Pair<List<PTable>, List<TableInfo>> allDescendantViews = ViewUtil.findAllDescendantViews(hTable, configuration, null, schemaNameBytes,
+                        tableName.getBytes(), EnvironmentEdgeManager.currentTimeMillis(), false);
+                List<PTable> legitimateDecendants = allDescendantViews.getFirst();
+
+                List<InputSplit> inputSplits = new ArrayList<>();
+
+                HashMap<String, PColumn> columnMap = new HashMap<>();
+                for (PColumn column : newDataTable.getColumns()) {
+                    columnMap.put(column.getName().getString(), column);
+                }
+
+                for (PTable decendant : legitimateDecendants) {
+                    if (decendant.getViewType() == PTable.ViewType.READ_ONLY) {
+                        continue;
+                    }
+                    PTable newView = Transform.getTransformedView(decendant, newDataTable, columnMap, true);
+                    QueryPlan queryPlan = getQueryPlan(newView, decendant, connection);
+                    inputSplits.addAll(generateSplits(queryPlan, configuration));

Review comment:
   Let me add a parameter to transform tool since it is 10 now (default).






[jira] [Commented] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-02-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498330#comment-17498330 ]

ASF GitHub Bot commented on PHOENIX-6649:

gokceni commented on a change in pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#discussion_r815190751



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/transform/TransformToolIT.java
##########
@@ -948,6 +959,175 @@ public void testTransformVerify_ForceCutover() throws Exception {
         }
     }
 
+    @Test
+    public void testTransformForGlobalViews() throws Exception {
+        String schemaName = generateUniqueName();
+        String dataTableName = generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String view1Name = "VW1_" + generateUniqueName();
+        String view2Name = "VW2_" + generateUniqueName();
+        String upsertQuery = "UPSERT INTO %s VALUES(?, ?, ?, ?, ?, ?)";
+
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.setAutoCommit(true);
+            int numOfRows = 0;
+            createTableAndUpsertRows(conn, dataTableFullName, numOfRows, tableDDLOptions);
+            SingleCellIndexIT.assertMetadata(conn, PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN, PTable.QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, dataTableFullName);
+
+            String createViewSql = "CREATE VIEW " + view1Name + " ( VIEW_COL11 INTEGER, VIEW_COL12 VARCHAR ) AS SELECT * FROM "
+                    + dataTableFullName + " where ID=1";
+            conn.createStatement().execute(createViewSql);
+
+            createViewSql = "CREATE VIEW " + view2Name + " ( VIEW_COL21 INTEGER, VIEW_COL22 VARCHAR ) AS SELECT * FROM "
+                    + dataTableFullName + " where ID=11";
+            conn.createStatement().execute(createViewSql);
+
+            PreparedStatement stmt1 = conn.prepareStatement(String.format(upsertQuery, view1Name));
+            stmt1.setInt(1, 1);
+            stmt1.setString(2, "uname1");
+            stmt1.setInt(3, 95051);
+            stmt1.setString(4, "");
+            stmt1.setInt(5, 101);
+            stmt1.setString(6, "viewCol12");
+            stmt1.executeUpdate();
+            conn.commit();
+
+            stmt1 = conn.prepareStatement(String.format(upsertQuery, view2Name));
+            stmt1.setInt(1, 11);
+            stmt1.setString(2, "uname11");
+            stmt1.setInt(3, 950511);
+            stmt1.setString(4, "");
+            stmt1.setInt(5, 111);
+            stmt1.setString(6, "viewCol22");
+            stmt1.executeUpdate();
+            conn.commit();
+
+            conn.createStatement().execute("ALTER TABLE " + dataTableFullName +
+                    " SET IMMUTABLE_STORAGE_SCHEME=SINGLE_CELL_ARRAY_WITH_OFFSETS, COLUMN_ENCODED_BYTES=2");
+            SystemTransformRecord record = Transform.getTransformRecord(schemaName, dataTableName, null, null, conn.unwrap(PhoenixConnection.class));
+            assertNotNull(record);
+            assertMetadata(conn, PTable.ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS, PTable.QualifierEncodingScheme.TWO_BYTE_QUALIFIERS, record.getNewPhysicalTableName());
+
+            List<String> args = getArgList(schemaName, dataTableName, null,
+                    null, null, null, false, false, false, false, false);
+            runTransformTool(args.toArray(new String[0]), 0);
+            Transform.doCutover(conn.unwrap(PhoenixConnection.class), record);
+            Transform.updateTransformRecord(conn.unwrap(PhoenixConnection.class), record, PTable.TransformStatus.COMPLETED);
+            try (Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+                admin.disableTable(TableName.valueOf(dataTableFullName));
+                admin.truncateTable(TableName.valueOf(dataTableFullName), true);
+            }
+
+            String sql = "SELECT VIEW_COL11, VIEW_COL12 FROM %s ";
+            ResultSet rs1 = conn.createStatement().executeQuery(String.format(sql, view1Name));
+            assertTrue(rs1.next());
+            assertEquals(101, rs1.getInt(1));
+            assertEquals("viewCol12", rs1.getString(2));
+
+            sql = "SELECT VIEW_COL21, VIEW_COL22 FROM %s ";
+            rs1 = conn.createStatement().executeQuery(String.format(sql, view2Name));
+            assertTrue(rs1.next());
+            assertEquals(111, rs1.getInt(1));
+            assertEquals("viewCol22", rs1.getString(2));
+        }
+    }
+
+    @Test
+    public void testTransformForTenantViews() throws Exception {
+        String schemaName = generateUniqueName();
+        String dataTableName = generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String view1Name = "VW1_" + generateUniqueName();
+        String view2Name = "VW2_" + generateUniqueName();
+        String upsertQuery = "UPSERT INTO %s VALUES(?, ?, ?, ?, ?, ?)";
+
+        Properties props = 

[jira] [Commented] (PHOENIX-6656) Reindent NonAggregateRegionScannerFactory

2022-02-25 Thread Kadir OZDEMIR (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498325#comment-17498325 ]

Kadir OZDEMIR commented on PHOENIX-6656:


The patch was generated by auto-fixing the indentation using IntelliJ, so no 
manual editing was involved. [~giskender]






[jira] [Commented] (PHOENIX-6656) Reindent NonAggregateRegionScannerFactory

2022-02-25 Thread Gokcen Iskender (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498326#comment-17498326 ]

Gokcen Iskender commented on PHOENIX-6656:

LGTM






[jira] [Commented] (PHOENIX-6636) Replace bundled log4j libraries with reload4j

2022-02-25 Thread Istvan Toth (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498319#comment-17498319 ]

Istvan Toth commented on PHOENIX-6636:

Thanks [~larsh],

Seems like I forgot to update the logging backend jar name pattern in the Python startup scripts.
I'll push an addendum on Monday at the latest.
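
To illustrate the kind of mismatch meant here, a hedged sketch (the pattern strings are invented; the real startup scripts are Python): a pattern written for the old bundled jar name no longer matches the reload4j jar.

{code:java}
import java.util.regex.Pattern;

// Hypothetical example, not the actual script logic.
public class BackendJarPattern {
    public static void main(String[] args) {
        // A pattern written for the bundled log4j 1.x jar...
        Pattern old = Pattern.compile("log4j-.*\\.jar");
        // ...must be widened to also match the reload4j replacement.
        Pattern updated = Pattern.compile("(log4j|reload4j)-.*\\.jar");
        System.out.println(old.matcher("reload4j-1.2.19.jar").matches());     // false
        System.out.println(updated.matcher("reload4j-1.2.19.jar").matches()); // true
    }
}
{code}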






[jira] [Comment Edited] (PHOENIX-6636) Replace bundled log4j libraries with reload4j

2022-02-25 Thread Lars Hofhansl (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498305#comment-17498305 ]

Lars Hofhansl edited comment on PHOENIX-6636 at 2/25/22, 9:26 PM:

Hmm... Getting this now when running the Phoenix client (hbase 2.4 and hadoop 3.3.x profile).

This used to work before. Might be due to this change, or to the new 3rd party dependency.

Update: Yeah, it's this one, not the updated 3rd party dependency.

[~stoty] 
{code:java}
WARNING: Exception thrown by removal listener
java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.monitoring.GlobalClientMetrics
at org.apache.phoenix.query.ConnectionQueryServicesImpl.close(ConnectionQueryServicesImpl.java:569)
at org.apache.phoenix.jdbc.PhoenixDriver$2.onRemoval(PhoenixDriver.java:163)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1808)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3379)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3355)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$Segment.remove(LocalCache.java:2989)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache.remove(LocalCache.java:4104)
at org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$LocalManualCache.invalidate(LocalCache.java:4739)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:270)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:135)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.Commands.connect(Commands.java:1364)
at sqlline.Commands.connect(Commands.java:1244)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:730)
at sqlline.SqlLine.initArgs(SqlLine.java:410)
at sqlline.SqlLine.begin(SqlLine.java:515)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)

java.lang.NoClassDefFoundError: org/apache/log4j/AppenderSkeleton
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:800)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:698)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:621)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:579)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at org.apache.hadoop.metrics2.source.JvmMetrics.getEventCounters(JvmMetrics.java:288)
at org.apache.hadoop.metrics2.source.JvmMetrics.getMetrics(JvmMetrics.java:157)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:156)
at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:329)
at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:315)
at java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:100)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:73)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:101)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
at org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:123)
at 
{code}
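
The second trace is the telling one: Hadoop's JvmMetrics walks event counters that extend the log4j 1.x AppenderSkeleton class, so that class must be loadable at runtime. A minimal sketch of the failure mode (assuming Hadoop's org.apache.hadoop.log.metrics.EventCounter, which extends org.apache.log4j.AppenderSkeleton, and a classpath carrying neither log4j 1.x nor reload4j):

{code:java}
// Hypothetical probe, not from the Phoenix code base.
public class AppenderSkeletonProbe {
    public static void main(String[] args) {
        try {
            // Linking this class requires org.apache.log4j.AppenderSkeleton;
            // with no log4j-1.x-compatible jar (log4j or reload4j) on the
            // classpath, the load fails as in the stack trace above.
            Class.forName("org.apache.hadoop.log.metrics.EventCounter");
            System.out.println("log4j 1.x binary classes are available");
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            System.err.println("missing log4j 1.x classes: " + e);
        }
    }
}
{code}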


[jira] [Commented] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-02-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498304#comment-17498304 ]

ASF GitHub Bot commented on PHOENIX-6649:

gjacoby126 commented on a change in pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#discussion_r815118143



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/transform/PhoenixTransformWithViewsInputFormat.java
##########
@@ -0,0 +1,116 @@
+package org.apache.phoenix.mapreduce.transform;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.compile.MutationPlan;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.compile.ServerBuildTransformingTableCompiler;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.PhoenixInputFormat;
+import org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.transform.Transform;
+import org.apache.phoenix.thirdparty.com.google.common.base.Strings;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Properties;
+
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES;
+import static org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getIndexToolIndexTableName;
+
+public class PhoenixTransformWithViewsInputFormat<T extends DBWritable> extends PhoenixServerBuildIndexInputFormat<T> {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(PhoenixTransformWithViewsInputFormat.class);
+    @Override
+    public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
+        final Configuration configuration = context.getConfiguration();
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)) {
+            try (Table hTable = connection.unwrap(PhoenixConnection.class).getQueryServices().getTable(
+                    SchemaUtil.getPhysicalTableName(SYSTEM_CHILD_LINK_NAME_BYTES, configuration).toBytes())) {
+                String oldDataTableFullName = PhoenixConfigurationUtil.getIndexToolDataTableName(configuration);
+                String newDataTableFullName = getIndexToolIndexTableName(configuration);
+                PTable newDataTable = PhoenixRuntime.getTableNoCache(connection, newDataTableFullName);
+                String schemaName = SchemaUtil.getSchemaNameFromFullName(oldDataTableFullName);
+                String tableName = SchemaUtil.getTableNameFromFullName(oldDataTableFullName);
+                byte[] schemaNameBytes = Strings.isNullOrEmpty(schemaName) ? null : schemaName.getBytes();
+                Pair<List<PTable>, List<TableInfo>> allDescendantViews = ViewUtil.findAllDescendantViews(hTable, configuration, null, schemaNameBytes,
+                        tableName.getBytes(), EnvironmentEdgeManager.currentTimeMillis(), false);
+                List<PTable> legitimateDecendants = allDescendantViews.getFirst();
+
+                List<InputSplit> inputSplits = new ArrayList<>();
+
+                HashMap<String, PColumn> columnMap = new HashMap<>();
+                for (PColumn column : newDataTable.getColumns()) {
+                    columnMap.put(column.getName().getString(), column);
+                }
+
+                for (PTable decendant : legitimateDecendants) {
+                    if (decendant.getViewType() == PTable.ViewType.READ_ONLY) {
+                        continue;
+                    }
+                    PTable newView = Transform.getTransformedView(decendant, newDataTable, columnMap, true);
+                    QueryPlan queryPlan = getQueryPlan(newView, decendant, connection);
+                    inputSplits.addAll(generateSplits(queryPlan, configuration));
+                }

Review comment:
   What happens if the views are not disjoint? Do we just harmlessly transform the same data multiple times into the same shape? Seems like we should have a test for that if we don't already.
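
For reference, a hedged sketch of the non-disjoint case being asked about (table and view names invented): every row matching the second predicate also matches the first, so the two view row sets overlap rather than partition the base table, and a per-view transform pass would visit such a row twice.

{code:java}
// Hypothetical DDL mirroring the overlapping views in the test above.
String overlappingView1 = "CREATE VIEW VW_A AS SELECT * FROM MY_TABLE"
        + " WHERE DATA = 'GLOBAL_VIEW'";
String overlappingView2 = "CREATE VIEW VW_B AS SELECT * FROM MY_TABLE"
        + " WHERE DATA = 'GLOBAL_VIEW' AND ZIP = 95053";
// Every row matching VW_B's predicate also matches VW_A's, so the two
// views overlap on those rows.
{code}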

[jira] [Commented] (PHOENIX-6458) Using global indexes for queries with uncovered columns

2022-02-25 Thread Lars Hofhansl (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17498285#comment-17498285 ]

Lars Hofhansl commented on PHOENIX-6458:


(y)

Awesome

> Using global indexes for queries with uncovered columns
> ---
>
> Key: PHOENIX-6458
> URL: https://issues.apache.org/jira/browse/PHOENIX-6458
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0
>Reporter: Kadir Ozdemir
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: PHOENIX-6458.master.001.patch, 
> PHOENIX-6458.master.002.patch
>
>
> The Phoenix query optimizer does not use a global index for a query with the 
> columns that are not covered by the global index if the query does not have 
> the corresponding index hint for this index. With the index hint, the 
> optimizer rewrites the query where the index is used within a subquery. With 
> this subquery, the row keys of the index rows that satisfy the subquery are 
> retrieved by the Phoenix client and then pushed into the Phoenix server 
> caches of the data table regions. Finally, on the server side, data table 
> rows are scanned and joined with the index rows using HashJoin. Based on the 
> selectivity of the original query, this join operation may still result in 
> scanning a large number of data table rows. 
> Eliminating these data table scans would be a significant improvement. To do 
> that, instead of rewriting the query, the Phoenix optimizer simply treats the 
> global index as a covered index for the given query. With this, the Phoenix 
> query optimizer chooses the index table for the query especially when the 
> index row key prefix length is greater than the data row key prefix length 
> for the query. On the server side, the index table is scanned using index row 
> key ranges implied by the query and the index row keys are then mapped to the 
> data table row keys (please note an index row key includes all the data row 
> key columns). Finally, the corresponding data table rows are scanned using 
> server-to-server RPCs.  PHOENIX-6458 (this Jira) retrieves the data table 
> rows one by one using the HBase get operation. PHOENIX-6501 replaces this get 
> operation with the scan operation to reduce the number of server-to-server 
> RPC calls.
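
For readers following the mechanics, a small, hypothetical JDBC sketch of the 
uncovered-column case; the table, index, and URL below are invented for 
illustration and are not taken from the patch itself:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UncoveredIndexExample {
    public static void main(String[] args) throws Exception {
        // Assumed local Phoenix URL; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS EMP "
                    + "(ID INTEGER PRIMARY KEY, DEPT INTEGER, NAME VARCHAR)");
            // The index covers only DEPT, so NAME is an uncovered column.
            stmt.execute("CREATE INDEX IF NOT EXISTS EMP_DEPT_IDX ON EMP (DEPT)");

            // Previously the optimizer would use EMP_DEPT_IDX for this query
            // only with an explicit hint:
            //   SELECT /*+ INDEX(EMP EMP_DEPT_IDX) */ NAME FROM EMP WHERE DEPT = 10
            // With the change described above, the index can be chosen without
            // the hint, and the uncovered NAME column is fetched from the data
            // table on the server side.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT NAME FROM EMP WHERE DEPT = 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
{code}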



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6458) Using global indexes for queries with uncovered columns

2022-02-25 Thread Kadir OZDEMIR (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498248#comment-17498248
 ] 

Kadir OZDEMIR commented on PHOENIX-6458:


[~larsh] I have pushed it to 4.x. I will push it to 5.1 too.

> Using global indexes for queries with uncovered columns
> ---
>
> Key: PHOENIX-6458
> URL: https://issues.apache.org/jira/browse/PHOENIX-6458
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0
>Reporter: Kadir Ozdemir
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: PHOENIX-6458.master.001.patch, 
> PHOENIX-6458.master.002.patch
>
>
> The Phoenix query optimizer does not use a global index for a query with the 
> columns that are not covered by the global index if the query does not have 
> the corresponding index hint for this index. With the index hint, the 
> optimizer rewrites the query where the index is used within a subquery. With 
> this subquery, the row keys of the index rows that satisfy the subquery are 
> retrieved by the Phoenix client and then pushed into the Phoenix server 
> caches of the data table regions. Finally, on the server side, data table 
> rows are scanned and joined with the index rows using HashJoin. Based on the 
> selectivity of the original query, this join operation may still result in 
> scanning a large number of data table rows. 
> Eliminating these data table scans would be a significant improvement. To do 
> that, instead of rewriting the query, the Phoenix optimizer simply treats the 
> global index as a covered index for the given query. With this, the Phoenix 
> query optimizer chooses the index table for the query especially when the 
> index row key prefix length is greater than the data row key prefix length 
> for the query. On the server side, the index table is scanned using index row 
> key ranges implied by the query and the index row keys are then mapped to the 
> data table row keys (please note an index row key includes all the data row 
> key columns). Finally, the corresponding data table rows are scanned using 
> server-to-server RPCs.  PHOENIX-6458 (this Jira) retrieves the data table 
> rows one by one using the HBase get operation. PHOENIX-6501 replaces this get 
> operation with the scan operation to reduce the number of server-to-server 
> RPC calls.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-02-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498217#comment-17498217
 ] 

ASF GitHub Bot commented on PHOENIX-6649:
-

gokceni commented on pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#issuecomment-1051038429


   @gjacoby126 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix] gokceni commented on pull request #1397: PHOENIX-6649 TransformTool to support views and tenant views

2022-02-25 Thread GitBox


gokceni commented on pull request #1397:
URL: https://github.com/apache/phoenix/pull/1397#issuecomment-1051038429


   @gjacoby126 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (PHOENIX-6636) Replace bundled log4j libraries with reload4j

2022-02-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498108#comment-17498108
 ] 

ASF GitHub Bot commented on PHOENIX-6636:
-

stoty closed pull request #1379:
URL: https://github.com/apache/phoenix/pull/1379


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace bundled log4j libraries with reload4j
> -
>
> Key: PHOENIX-6636
> URL: https://issues.apache.org/jira/browse/PHOENIX-6636
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core, queryserver
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2, 5.1.3
>
>
> To reduce the number of dependencies with unresolved CVEs, replace the bundled 
> log4j libraries with reload4j ([https://reload4j.qos.ch/]).
> This will also require bumping the slf4j version.
> This is a quick fix, and does not preclude moving to some different backend 
> later (like log4j2 or logback)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix] stoty closed pull request #1379: PHOENIX-6636 Replace bundled log4j libraries with reload4j

2022-02-25 Thread GitBox


stoty closed pull request #1379:
URL: https://github.com/apache/phoenix/pull/1379


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (PHOENIX-6636) Replace bundled log4j libraries with reload4j

2022-02-25 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498104#comment-17498104
 ] 

Istvan Toth commented on PHOENIX-6636:
--

Committed to all active branches of the Phoenix repo.
Thanks for the review [~gjacoby].

Keeping the ticket open, as we still need to update/review the other repos.

> Replace bundled log4j libraries with reload4j
> -
>
> Key: PHOENIX-6636
> URL: https://issues.apache.org/jira/browse/PHOENIX-6636
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core, queryserver
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> To reduce the number of dependencies with unresolved CVEs, replace the bundled 
> log4j libraries with reload4j ([https://reload4j.qos.ch/]).
> This will also require bumping the slf4j version.
> This is a quick fix, and does not preclude moving to some different backend 
> later (like log4j2 or logback)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-02-25 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498091#comment-17498091
 ] 

Istvan Toth commented on PHOENIX-6632:
--

[~ashwinb1998], if you have the time, I'd appreciate your help with those 
tests.

So far, the most serious issue I have found is that Spark3 requires that ALL 
columns be specified when writing data, which breaks the update-table use case 
for Phoenix.
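
To make the failure mode concrete, a rough Java sketch of the partial-column 
write that worked with the Spark2 connector; the option names follow the 
phoenix-spark documentation and may differ in the WIP Spark3 connector, and 
the target table is hypothetical:

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class PartialColumnWrite {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("phoenix-partial-write")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical target table TEST_TABLE(ID, COL1, COL2): the frame
        // carries only ID and COL1, i.e. the "update a subset of columns"
        // pattern that the Spark2 connector turned into a plain UPSERT.
        Dataset<Row> updates = spark.sql("SELECT 1 AS ID, 'updated' AS COL1");

        // Under the Spark3 DataSourceV2 TableProvider API the connector is
        // handed the full table schema, so a frame that omits COL2 fails
        // schema validation instead of performing a partial upsert.
        updates.write()
                .format("phoenix")                  // connector short name
                .option("table", "TEST_TABLE")
                .option("zkUrl", "localhost:2181")  // per phoenix-spark docs
                .mode(SaveMode.Overwrite)           // Phoenix treats this as UPSERT
                .save();

        spark.stop();
    }
}
{code}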

> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark-3, the DatasourceV2 API has had major changes, where a new 
> TableProvider Interface has been introduced. These new changes bring in more 
> control to the data source developer and better integration with 
> spark-optimizer.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-02-25 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498086#comment-17498086
 ] 

Istvan Toth commented on PHOENIX-6632:
--

Assigning this to me, as I'm working on integrating this patch now.

Unfortunately, a lot of the Scala tests fail.

While those tests use the old, deprecated Spark API, they are much more 
thorough than the Java tests; a few of the failures are test issues, but quite 
a few highlight functionality that worked with Spark2 and does not work with 
Spark 3.

You can test those on my WIP PR 
[https://github.com/apache/phoenix-connectors/pull/71]
by running 
{code:java}
mvn clean verify -am -pl phoenix5-spark3-it -Dscala-tests-enabled{code}
The tests log to the console, but you can find the results in 
phoenix5-spark3-it/target/surefire-reports.

> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark-3, the DatasourceV2 API has had major changes, where a new 
> TableProvider Interface has been introduced. These new changes bring in more 
> control to the data source developer and better integration with 
> spark-optimizer.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)