[jira] [Comment Edited] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203061#comment-16203061 ]

James Taylor edited comment on PHOENIX-4242 at 10/13/17 5:56 AM:
---
Here's some initial feedback, but would you mind reviewing this too, [~tdsilva]?

- You'll want to select COLUMN_NAME in addition to COLUMN_FAMILY, as it looks like we store the tenant ID there. Then we'll need to create a new PhoenixConnection for the {{PhoenixRuntime.getTableNoCache()}} call whenever the value of COLUMN_NAME changes (creating a connection is not an expensive operation). You'd set the PhoenixRuntime.TenantId connection property so that you'd only be looking for the tables for that tenant. You can collect as many different TABLE_SCHEM, TABLE_NAME pairs as you like for the same TENANT_ID and query for them in a single shot (i.e. WHERE (%s, %s, %s, %s) IN ((?,?,?,?), (?,?,?,?), ...)).
{code}
private static final String CHILD_LINK_QUERY = String.format(
    "SELECT " + COLUMN_FAMILY + " FROM " + SYSTEM_CATALOG
        + " WHERE (%s, %s, %s, %s) IN ((?,?,?,?))",
    TENANT_ID, TABLE_SCHEM, TABLE_NAME, LINK_TYPE);

private static void getAllDescendantViewsRecursive(Connection conn, PName tenantId,
        PName tableSchem, PName tableName, List<PTable> results) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(CHILD_LINK_QUERY)) {
        ps.setString(1, tenantId == null ? null : tenantId.getString());
        ps.setString(2, tableSchem == null ? null : tableSchem.getString());
        ps.setString(3, tableName.getString());
        ps.setByte(4, PTable.LinkType.CHILD_TABLE.getSerializedValue());
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String childTableName = rs.getString(1);
                PTable childPTable = PhoenixRuntime.getTableNoCache(conn, childTableName);
                results.add(childPTable);
                getAllDescendantViewsRecursive(conn, childPTable.getTenantId(),
                    childPTable.getSchemaName(), childPTable.getTableName(), results);
            }
        }
    }
}
{code}
- Create a test that requires the above. A good way would be to have the same named table for different tenants; you should only pick up the correct children.
- If the number of children is large (which can be the case, as we'll have a child linking row for every tenant), then we'll be churning the server-side LRU cache by loading the PTable for every child. I think instead we can get only the index linking rows and in turn look up the INDEX_DISABLE_TIMESTAMP for each index.
- An important one to follow up on is PHOENIX-4263, to see if our partial index rebuilder handles rebuilding indexes on views.
- [~tdsilva] - did we consider encoding the tenant ID in the COLUMN_FAMILY column so we could keep the COLUMN_NAME null to ensure linking rows would be grouped together? Or an alternative would be to add another nullable column (like CHILD_TENANT_ID) at the end of the SYSTEM.CATALOG row key. We may want to consider changing this because there can potentially be many, many child linking rows. Maybe file a JIRA for further discussion?

was (Author: jamestaylor): [earlier revision of this comment omitted; truncated duplicate of the text above]
[jira] [Created] (PHOENIX-4286) Create EXPORT SCHEMA command
Geoffrey Jacoby created PHOENIX-4286:

Summary: Create EXPORT SCHEMA command
Key: PHOENIX-4286
URL: https://issues.apache.org/jira/browse/PHOENIX-4286
Project: Phoenix
Issue Type: New Feature
Reporter: Geoffrey Jacoby

Phoenix takes in DDL statements and uses them to create metadata in the various SYSTEM tables. There's currently no supported way to go in the opposite direction. This is particularly important in migration use cases. If schemas between two clusters are already synchronized, migration of data is _relatively_ straightforward using either Phoenix or HBase's MapReduce integration. Syncing metadata can be much more complicated, particularly if only a subset needs to be migrated. For example, an operator migrating a single tenant from one cluster to another would also want to migrate any views or sequences owned by that tenant. This can be accomplished by treating the SYSTEM tables as data tables and migrating subsets of them, but such implementations rely on brittle low-level implementation details that can and do change. Given an EXPORT command, this could be done at a much higher level: you simply select the DDL statements you need from the source cluster, and then run them on the target cluster.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
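A sketch of what the apply side of such a migration could look like, assuming the proposed EXPORT SCHEMA command emitted a semicolon-separated DDL script. `DdlScript` and its splitting rules are hypothetical illustrations, not an existing Phoenix API.

```java
import java.util.ArrayList;
import java.util.List;

public class DdlScript {
    // Naive splitter for an exported DDL script: splits on ';' characters that
    // are not inside single-quoted string literals, dropping empty fragments.
    static List<String> splitStatements(String script) {
        List<String> statements = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuote = false;
        for (char c : script.toCharArray()) {
            if (c == '\'') inQuote = !inQuote;
            if (c == ';' && !inQuote) {
                String stmt = current.toString().trim();
                if (!stmt.isEmpty()) statements.add(stmt);
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        String tail = current.toString().trim();
        if (!tail.isEmpty()) statements.add(tail);
        return statements;
    }

    // Replaying the script on the target cluster would then just be a loop
    // over a JDBC connection, e.g.:
    //   try (Connection conn = DriverManager.getConnection(targetJdbcUrl)) {
    //       for (String ddl : splitStatements(exportedScript)) {
    //           conn.createStatement().execute(ddl);
    //       }
    //   }
}
```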
[jira] [Commented] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203090#comment-16203090 ]

James Taylor commented on PHOENIX-4242:
---
Thinking about this more, I think we should start with PHOENIX-4263 to see what happens when an index failure occurs on an index on a view. I think it's likely that we'd fail to set the INDEX_DISABLE_TIMESTAMP in the first place, so the code you wrote would be for naught. We'll likely want instead to have a row in the SYSTEM.CATALOG table that corresponds to the base/global index table row for all the indexes on a view. Then we'd have a single place to set the INDEX_STATE and INDEX_DISABLE_TIMESTAMP, and indexes on views would essentially inherit that value. An alternative would be to set the INDEX_DISABLE_TIMESTAMP on the correct view index row, but we'd need to handle that in PhoenixIndexFailurePolicy.handleFailureWithExceptions() by figuring out the VIEW_INDEX_ID. The advantage of this approach is that we wouldn't be disabling all indexes on a view, but only the ones in which the write failures occurred.

> Fix Indexer post-compact hook logging of NPE and TableNotFound
> --
> Key: PHOENIX-4242
> URL: https://issues.apache.org/jira/browse/PHOENIX-4242
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.12.0
> Reporter: Vincent Poon
> Assignee: Vincent Poon
> Attachments: PHOENIX-4242.v2.master.patch, PHOENIX-4242.v3.master.patch, PHOENIX-4747.v1.master.patch
>
> The post-compact hook in the Indexer seems to log extraneous log messages indicating NPE or TableNotFound. The TableNotFound exceptions seem to indicate actual table names prefixed with MERGE or RESTORE, and sometimes suffixed with a digit, so perhaps these are views or something similar.
> Examples:
> 2017-09-28 13:35:03,118 WARN [ctions-1506410238599] index.Indexer - Unable to permanently disable indexes being partially rebuild for SYSTEM.SEQUENCE
> java.lang.NullPointerException
> 2017-09-28 10:20:56,406 WARN [ctions-1506410238415] index.Indexer - Unable to permanently disable indexes being partially rebuild for MERGE_PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA2
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=MERGE_PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA2
[jira] [Comment Edited] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203061#comment-16203061 ]

James Taylor edited comment on PHOENIX-4242 at 10/13/17 5:18 AM: [identical to the 5:56 AM revision of this comment above, minus the bullet about churning the server-side LRU cache]
[jira] [Commented] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203061#comment-16203061 ]

James Taylor commented on PHOENIX-4242: [original text of the comment whose edited revisions appear above, minus the bullets about LRU cache churn and PHOENIX-4263]
[jira] [Updated] (PHOENIX-1160) Allow mix of immutable and mutable indexes on the same table
[ https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Taylor updated PHOENIX-1160:
---
Description:

Currently, a table must be marked as immutable through the IMMUTABLE_ROWS=true property specified at creation time. In this case, all indexes added to the table are immutable, while without this property, all indexes are mutable.

Instead, we should support a mix of immutable and mutable indexes. We already have an INDEX_TYPE field on our metadata row. We can add a new IMMUTABLE keyword and specify that an index is immutable like this:
{code}
CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
{code}
It would be up to the application developer to ensure that only columns that don't mutate are part of an immutable index (we already rely on this anyway).

was: [same text as above, with this additional final paragraph:]

Related to this is support for deletion of rows when a table has an immutable index (PHOENIX-619). In this JIRA, we'd throw an exception if a DELETE statement filters on any columns that aren't contained in *all* immutable indexes for that table. We currently disallow *any* DELETE on a table that's marked as immutable.
> Allow mix of immutable and mutable indexes on the same table
> --
> Key: PHOENIX-1160
> URL: https://issues.apache.org/jira/browse/PHOENIX-1160
> Project: Phoenix
> Issue Type: Improvement
> Reporter: James Taylor
>
> Currently, a table must be marked as immutable, through the IMMUTABLE_ROWS=true property specified at creation time. In this case, all indexes added to the table are immutable, while without this property, all indexes are mutable.
> Instead, we should support a mix of immutable and mutable indexes. We already have an INDEX_TYPE field on our metadata row. We can add a new IMMUTABLE keyword and specify an index is immutable like this:
> {code}
> CREATE IMMUTABLE INDEX foo ON bar(c2, c1);
> {code}
> It would be up to the application developer to ensure that only columns that don't mutate are part of an immutable index (we already rely on this anyway).
[jira] [Reopened] (PHOENIX-1160) Allow mix of immutable and mutable indexes on the same table
[ https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Taylor reopened PHOENIX-1160:
---
Reopening, since declaring an index as immutable would have the same perf benefit as with immutable tables: there'd be no need to look up the old row and issue a delete. We have users relying on this already, but only by declaring a table as immutable while still allowing it to mutate (which works if you avoid certain features, but is bad practice).
[jira] [Commented] (PHOENIX-1160) Allow mix of immutable and mutable indexes on the same table
[ https://issues.apache.org/jira/browse/PHOENIX-1160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203041#comment-16203041 ]

James Taylor commented on PHOENIX-1160:
---
[~tdsilva] - do you think you could take a look at this one once PHOENIX-3534 is wrapped up?
[jira] [Commented] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203019#comment-16203019 ]

Ethan Wang commented on PHOENIX-4242:
---
[~vincentpoon] Sure. I'm looking at it.
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202669#comment-16202669 ]

ASF GitHub Bot commented on PHOENIX-4237:
---
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/275

Thanks for the patch, @shehzaadn. This looks like a general enough built-in function to include in Phoenix, IMHO. See inline for more specific comments. It'd be much better to include the first two commits as external dependencies. If we don't do that, we'll need to quickly follow up with replacing them with external dependencies (and make sure we don't change those files at all).

> Allow sorting on (Java) collation keys for non-English locales
> --
> Key: PHOENIX-4237
> URL: https://issues.apache.org/jira/browse/PHOENIX-4237
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Shehzaad Nakhoda
> Fix For: 4.12.0
>
> Strings stored via Phoenix can be composed from a subset of the entire set of Unicode characters. The natural sort order for strings for different languages often differs from the order dictated by the binary representation of the characters of these strings. Java provides the idea of a Collator which, given an input string and a (language) locale, can generate a CollationKey which can then be used to compare strings in that natural order.
> Salesforce has recently open-sourced grammaticus. IBM open-sourced ICU4J some time ago. These technologies can be combined to provide a robust new Phoenix function that can be used in an ORDER BY clause to sort strings according to the user's locale.
[GitHub] phoenix issue #275: PHOENIX-4237: Add function to calculate Java collation k...
Github user JamesRTaylor commented on the issue: https://github.com/apache/phoenix/pull/275 Thanks for the patch, @shehzaadn. This looks like a general enough built-in function to include in Phoenix IMHO. See inline for more specific comments. It'd be much better to include the first two commits as external dependencies. If we don't do that, we'll need to quickly follow up with replacing them with external dependencies (and make sure we don't change those files at all). ---
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202667#comment-16202667 ]

ASF GitHub Bot commented on PHOENIX-4237:
---
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/275#discussion_r144416251

--- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/CollationKeyFunction.java ---
@@ -0,0 +1,233 @@
+package org.apache.phoenix.expression.function;
+
+import java.sql.SQLException;
+import java.text.Collator;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Locale;
+
+import org.apache.commons.lang.BooleanUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PIntegerArray;
+import org.apache.phoenix.schema.types.PUnsignedIntArray;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.schema.types.PVarchar;
+import org.apache.phoenix.schema.types.PhoenixArray;
+
+import com.force.db.i18n.LinguisticSort;
+import com.force.i18n.LocaleUtils;
+
+import com.ibm.icu.impl.jdkadapter.CollatorICU;
+import com.ibm.icu.util.ULocale;
+
+/**
+ * A Phoenix Function that calculates a collation key for an input string based
+ * on a caller-provided locale and collator strength and decomposition settings.
+ *
+ * It uses the open-source grammaticus and i18n packages to obtain the collators
+ * it needs.
--- End diff --

We should include more comments here. In particular, what sort order will we get? Does this mimic some other database's behavior (e.g. Oracle)? Does it deviate from that at all? Does Oracle follow some standard that we could point to?

Also, please make sure to budget time to update our online reference manual: https://phoenix.apache.org/language/functions.html. This lives in phoenix.csv in our SVN repo, as described here: https://phoenix.apache.org/building_website.html
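As background for the review question about sort order: java.text.Collator (the JDK API that ICU4J's CollatorICU adapts) produces locale-aware CollationKeys whose comparison matches the collator's compare(). A minimal illustration, independent of the Phoenix patch:

```java
import java.text.CollationKey;
import java.text.Collator;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Locale;

public class CollationKeyDemo {
    public static void main(String[] args) {
        Collator collator = Collator.getInstance(Locale.ENGLISH);

        // Binary order puts "Zebra" before "apple" (uppercase sorts first in
        // UTF-16); the collator yields the linguistic order instead.
        String[] words = {"Zebra", "apple", "deja"};
        Arrays.sort(words, collator); // Collator implements Comparator<Object>
        System.out.println(Arrays.toString(words)); // [apple, deja, Zebra]

        // Sorting pre-computed CollationKeys gives the same order, but each
        // string is decomposed only once - the property a collation-key
        // function exposed to ORDER BY would rely on.
        CollationKey[] keys = Arrays.stream(words)
            .map(collator::getCollationKey)
            .toArray(CollationKey[]::new);
        Arrays.sort(keys, Comparator.naturalOrder());

        // At PRIMARY strength, accent and case differences are ignored.
        collator.setStrength(Collator.PRIMARY);
        System.out.println(collator.compare("resume", "Résumé")); // 0
    }
}
```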
[GitHub] phoenix pull request #275: PHOENIX-4237: Add function to calculate Java coll...
Github user JamesRTaylor commented on a diff in the pull request: https://github.com/apache/phoenix/pull/275#discussion_r144416251 --- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/CollationKeyFunction.java --- @@ -0,0 +1,233 @@ +package org.apache.phoenix.expression.function; + +import java.sql.SQLException; +import java.text.Collator; +import java.util.Arrays; +import java.util.List; +import java.util.Locale; + +import org.apache.commons.lang.BooleanUtils; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.phoenix.expression.Expression; +import org.apache.phoenix.parse.FunctionParseNode; +import org.apache.phoenix.schema.tuple.Tuple; +import org.apache.phoenix.schema.types.PBoolean; +import org.apache.phoenix.schema.types.PDataType; +import org.apache.phoenix.schema.types.PInteger; +import org.apache.phoenix.schema.types.PIntegerArray; +import org.apache.phoenix.schema.types.PUnsignedIntArray; +import org.apache.phoenix.schema.types.PVarbinary; +import org.apache.phoenix.schema.types.PVarchar; +import org.apache.phoenix.schema.types.PhoenixArray; + +import com.force.db.i18n.LinguisticSort; +import com.force.i18n.LocaleUtils; + +import com.ibm.icu.impl.jdkadapter.CollatorICU; +import com.ibm.icu.util.ULocale; + +/** + * A Phoenix Function that calculates a collation key for an input string based + * on a caller-provided locale and collator strength and decomposition settings. + * + * It uses the open-source grammaticus and i18n packages to obtain the collators + * it needs. --- End diff -- We should include more comments here. In particular, what sort order will we get? Does this mimic some other databases behavior (i.e. Oracle)? Does it deviate from that at all? Does Oracle follow some standard that we could point to? 
Also, please make sure to budget time to update our online reference manual: https://phoenix.apache.org/language/functions.html. This lives in phoenix.csv in our SVN repo as described here: https://phoenix.apache.org/building_website.html ---
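To make the review question concrete, here is a minimal standalone sketch (not part of this patch; plain JDK classes only) showing how a java.text.Collator order can differ from binary/String order, and how a CollationKey turns the linguistic order into byte-comparable keys — the kind of value a COLLKEY-style function would return:

```java
import java.text.Collator;
import java.util.Locale;

public class CollatorOrderDemo {
    public static void main(String[] args) {
        // Binary (code point) order: 'Ä' (U+00C4) sorts after 'Z' (U+005A)
        System.out.println("Äpfel".compareTo("Zebra") > 0);   // true

        // German linguistic order: Ä sorts with A, so Äpfel comes first
        Collator de = Collator.getInstance(Locale.GERMAN);
        System.out.println(de.compare("Äpfel", "Zebra") < 0); // true

        // A CollationKey's byte form preserves the collator's order under
        // plain unsigned byte comparison
        byte[] k1 = de.getCollationKey("Äpfel").toByteArray();
        byte[] k2 = de.getCollationKey("Zebra").toByteArray();
        System.out.println(compareBytes(k1, k2) < 0);         // true
    }

    // Unsigned lexicographic byte comparison, as a row key store would do
    private static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    }
}
```

Conceptually this resembles Oracle's NLSSORT, which also returns a byte string whose binary order matches the requested linguistic order, though the exact key bytes differ between implementations.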
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202662#comment-16202662 ] ASF GitHub Bot commented on PHOENIX-4237: - Github user JamesRTaylor commented on a diff in the pull request: https://github.com/apache/phoenix/pull/275#discussion_r144415724 --- Diff: phoenix-core/src/test/java/org/apache/phoenix/expression/function/CollationKeyFunctionTest.java --- @@ -0,0 +1,134 @@ +package org.apache.phoenix.expression.function; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; + +import java.sql.SQLException; +import java.util.List; + +import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.phoenix.expression.function.CollationKeyFunction; +import org.apache.phoenix.schema.SortOrder; +import org.apache.phoenix.schema.types.PBoolean; +import org.apache.phoenix.schema.types.PInteger; +import org.apache.phoenix.schema.types.PVarchar; +import org.apache.phoenix.schema.types.PhoenixArray; + +import org.apache.phoenix.expression.Expression; +import org.apache.phoenix.expression.LiteralExpression; + +import org.junit.Test; + +import com.google.common.collect.Lists; + +/** + * "Unit" tests for CollationKeyFunction + * + * @author snakhoda + * + */ +public class CollationKeyFunctionTest { --- End diff -- We'll need more tests. You really want to test the sort order of a list of strings matches the expected linguistic sort order. These tests don't have a lot of meaning in terms of validating the sort order is correct IMHO. We'll also want end2end tests that use the new function.
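As a sketch of the kind of test being asked for (hypothetical data; the expected order assumes the JDK's German collation rules, not anything in the patch), one can sort a list with a Collator and compare it against a hand-verified linguistic order:

```java
import java.text.Collator;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LinguisticSortOrderSketch {
    public static void main(String[] args) {
        // Deliberately shuffled input
        List<String> words = new ArrayList<>(Arrays.asList("Zebra", "Äpfel", "Apfel"));

        // Sort using German linguistic order rather than code-point order
        words.sort(Collator.getInstance(Locale.GERMAN));

        // Hand-verified expectation: Ä sorts with A, after the unaccented form
        List<String> expected = Arrays.asList("Apfel", "Äpfel", "Zebra");
        System.out.println(words.equals(expected)); // true

        // Natural String order would instead put Äpfel (U+00C4) after Zebra
        List<String> binary = new ArrayList<>(words);
        binary.sort(null); // null comparator = natural (code point) order
        System.out.println(binary); // [Apfel, Zebra, Äpfel]
    }
}
```

An end-to-end version of the same idea would upsert the shuffled strings into a table and assert the row order of `SELECT ... ORDER BY COLLKEY(col, 'de')` against the hand-verified list.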
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202655#comment-16202655 ] ASF GitHub Bot commented on PHOENIX-4237: - Github user joshelser commented on a diff in the pull request: https://github.com/apache/phoenix/pull/275#discussion_r144414821 --- Diff: phoenix-core/src/main/java/com/force/db/i18n/OracleUpper.java --- @@ -0,0 +1,66 @@ +/* --- End diff -- Yup! You got it right, James. Whether we include the code in binary form or source form, for BSD, we treat them the same (propagate in LICENSE, and copyright/etc in NOTICE). If there's a license header for the file, we would also leave that, IIRC.
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202651#comment-16202651 ] ASF GitHub Bot commented on PHOENIX-4237: - Github user JamesRTaylor commented on a diff in the pull request: https://github.com/apache/phoenix/pull/275#discussion_r144414623 --- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/CollationKeyFunction.java --- @@ -0,0 +1,233 @@ +package org.apache.phoenix.expression.function; + +import java.sql.SQLException; +import java.text.Collator; +import java.util.Arrays; +import java.util.List; +import java.util.Locale; + +import org.apache.commons.lang.BooleanUtils; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.phoenix.expression.Expression; +import org.apache.phoenix.parse.FunctionParseNode; +import org.apache.phoenix.schema.tuple.Tuple; +import org.apache.phoenix.schema.types.PBoolean; +import org.apache.phoenix.schema.types.PDataType; +import org.apache.phoenix.schema.types.PInteger; +import org.apache.phoenix.schema.types.PIntegerArray; +import org.apache.phoenix.schema.types.PUnsignedIntArray; +import org.apache.phoenix.schema.types.PVarbinary; +import org.apache.phoenix.schema.types.PVarchar; +import org.apache.phoenix.schema.types.PhoenixArray; + +import com.force.db.i18n.LinguisticSort; +import com.force.i18n.LocaleUtils; + +import com.ibm.icu.impl.jdkadapter.CollatorICU; +import com.ibm.icu.util.ULocale; + +/** + * A Phoenix Function that calculates a collation key for an input string based + * on a caller-provided locale and collator strength and decomposition settings. + * + * It uses the open-source grammaticus and i18n packages to obtain the collators + * it needs. 
+ * + * @author snakhoda + * + */ +@FunctionParseNode.BuiltInFunction(name = CollationKeyFunction.NAME, args = { + // input string + @FunctionParseNode.Argument(allowedTypes = { PVarchar.class }), + // ISO Code for Locale + @FunctionParseNode.Argument(allowedTypes = { PVarchar.class }, isConstant = true), + // whether to use special upper case collator + @FunctionParseNode.Argument(allowedTypes = { PBoolean.class }, defaultValue = "false", isConstant = true), + // collator strength + @FunctionParseNode.Argument(allowedTypes = { PInteger.class }, defaultValue = "null", isConstant = true), + // collator decomposition + @FunctionParseNode.Argument(allowedTypes = { PInteger.class }, defaultValue = "null", isConstant = true) }) +public class CollationKeyFunction extends ScalarFunction { + + private static final Log LOG = LogFactory.getLog(CollationKeyFunction.class); + + public static final String NAME = "COLLKEY"; + + public CollationKeyFunction() { + } + + public CollationKeyFunction(List children) throws SQLException { + super(children); + } + + @Override + public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) { + try { + String inputValue = getInputValue(tuple, ptr); + String localeISOCode = getLocaleISOCode(tuple, ptr); + Boolean useSpecialUpperCaseCollator = getUseSpecialUpperCaseCollator(tuple, ptr); + Integer collatorStrength = getCollatorStrength(tuple, ptr); + Integer collatorDecomposition = getCollatorDecomposition(tuple, ptr); + + Locale locale = LocaleUtils.get().getLocaleByIsoCode(localeISOCode); + + if(LOG.isDebugEnabled()) { + LOG.debug(String.format("Locale: " + locale.toLanguageTag())); + } + + LinguisticSort linguisticSort = LinguisticSort.get(locale); + + Collator collator = BooleanUtils.isTrue(useSpecialUpperCaseCollator) + ? 
linguisticSort.getUpperCaseCollator(false) : linguisticSort.getCollator(); + + if (collatorStrength != null) { + collator.setStrength(collatorStrength); + } + + if (collatorDecomposition != null) { + collator.setDecomposition(collatorDecomposition); + } + + if(LOG.isDebugEnabled()) { + LOG.debug(String.format("Collator: [strength: %d, decomposition: %d], Special-Upper-Case: %s", + collator.getStrength(), collator.getDecomposition(), BooleanUtils.isTrue(useSpecialUpperCaseCollator))); + }
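For reviewers, a standalone sketch (separate from the patch; plain JDK java.text.Collator, which the grammaticus collators wrap) of what the raw strength and decomposition integers passed to this function control:

```java
import java.text.Collator;
import java.util.Locale;

public class CollatorStrengthSketch {
    public static void main(String[] args) {
        Collator c = Collator.getInstance(Locale.US);
        // Canonical decomposition so precomposed accented characters
        // (e.g. é as U+00E9) compare like their decomposed forms
        c.setDecomposition(Collator.CANONICAL_DECOMPOSITION);

        // TERTIARY (the default): case differences are significant
        c.setStrength(Collator.TERTIARY);
        System.out.println(c.compare("abc", "ABC") != 0); // true

        // SECONDARY: case is ignored, accents still distinguish strings
        c.setStrength(Collator.SECONDARY);
        System.out.println(c.compare("abc", "ABC") == 0); // true

        // PRIMARY: only base letters matter; case and accents are ignored
        c.setStrength(Collator.PRIMARY);
        System.out.println(c.compare("resume", "résumé") == 0); // true
    }
}
```

Documenting the accepted integer values (Collator.PRIMARY=0, SECONDARY=1, TERTIARY=2, IDENTICAL=3; NO_DECOMPOSITION=0, CANONICAL_DECOMPOSITION=1, FULL_DECOMPOSITION=2) in the function's Javadoc would address the review comment about explaining the resulting sort order.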
[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales
[ https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202645#comment-16202645 ] ASF GitHub Bot commented on PHOENIX-4237: - Github user JamesRTaylor commented on a diff in the pull request: https://github.com/apache/phoenix/pull/275#discussion_r144413717 --- Diff: phoenix-core/src/main/java/com/force/db/i18n/OracleUpper.java --- @@ -0,0 +1,66 @@ +/* --- End diff -- @joshelser - my take, based on this[1], is that it's ok to include source code in an ASF project with a BSD license (as opposed to only having BSD licensed software as an external dependency). WDYT? [1] http://apache.org/licenses/#code-developed-elsewhere-received-under-a-category-a-license-incorporated-into-apache-projects-distributed-by-apache-and-licensed-to-downstream-users-under-its-original-license
[jira] [Updated] (PHOENIX-4242) Fix Indexer post-compact hook logging of NPE and TableNotFound
[ https://issues.apache.org/jira/browse/PHOENIX-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated PHOENIX-4242: -- Attachment: PHOENIX-4242.v3.master.patch [~aertoria] Could you review this? > Fix Indexer post-compact hook logging of NPE and TableNotFound > -- > > Key: PHOENIX-4242 > URL: https://issues.apache.org/jira/browse/PHOENIX-4242 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.12.0 >Reporter: Vincent Poon >Assignee: Vincent Poon > Attachments: PHOENIX-4242.v2.master.patch, > PHOENIX-4242.v3.master.patch, PHOENIX-4747.v1.master.patch > > > The post-compact hook in the Indexer seems to log extraneous log messages > indicating NPE or TableNotFound. The TableNotFound exceptions seem to > indicate actual table names prefixed with MERGE or RESTORE, and sometimes > suffixed with a digit, so perhaps these are views or something similar. > Examples: > 2017-09-28 13:35:03,118 WARN [ctions-1506410238599] index.Indexer - Unable > to permanently disable indexes being partially rebuild for SYSTEM.SEQUENCE > java.lang.NullPointerException > 2017-09-28 10:20:56,406 WARN [ctions-1506410238415] index.Indexer - Unable > to permanently disable indexes being partially rebuild for > MERGE_PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA2 > org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table > undefined. tableName=MERGE_PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congratulations Vincent! On Wed, Oct 11, 2017 at 6:51 PM, James Taylor wrote: > On behalf of the Apache Phoenix PMC, I'm delighted to announce that Vincent > Poon has accepted our invitation to become a committer. He's had a big > impact in helping to stabilize our secondary index implementation, > including the creation of an index scrutiny tool that will detect > out-of-sync issues [1]. > > Looking forward to continued contributions. > > Please give Vincent a warm welcome to the project! > > James > > > [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scrutiny_Tool >
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congratulations Ethan! On Wed, Oct 11, 2017 at 6:45 PM, James Taylor wrote: > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan Wang > has accepted our invitation to become a committer. He's behind some of the > great new 4.12 features of table sampling [1] and approximate count > distinct [2] along with contributing to the less sexy work of helping to > stabilize our unit tests. > > Please give Ethan a warm welcome to the project! > > James > > [1] https://phoenix.apache.org/tablesample.html > [2] https://phoenix.apache.org/language/functions.html# > approx_count_distinct >
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congratulations, Ethan!! On Fri, Oct 13, 2017 at 12:20 AM, Andrew Purtell wrote: > Congratulations and welcome, Ethan. > > > On Wed, Oct 11, 2017 at 6:45 PM, James Taylor > wrote: > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan > Wang > > has accepted our invitation to become a committer. He's behind some of > the > > great new 4.12 features of table sampling [1] and approximate count > > distinct [2] along with contributing to the less sexy work of helping to > > stabilize our unit tests. > > > > Please give Ethan a warm welcome to the project! > > > > James > > > > [1] https://phoenix.apache.org/tablesample.html > > [2] https://phoenix.apache.org/language/functions.html# > > approx_count_distinct > > > > > > -- > Best regards, > Andrew > > Words like orphans lost among the crosstalk, meaning torn from truth's > decrepit hands >- A23, Crosstalk >
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congratulations, Vincent!! On Fri, Oct 13, 2017 at 12:20 AM, Andrew Purtell wrote: > Congratulations and welcome, Vincent. > > > On Wed, Oct 11, 2017 at 6:51 PM, James Taylor > wrote: > > > On behalf of the Apache Phoenix PMC, I'm delighted to announce that > Vincent > > Poon has accepted our invitation to become a committer. He's had a big > > impact in helping to stabilize our secondary index implementation, > > including the creation of an index scrutiny tool that will detect > > out-of-sync issues [1]. > > > > Looking forward to continued contributions. > > > > Please give Vincent a warm welcome to the project! > > > > James > > > > > > [1] https://phoenix.apache.org/secondary_indexing.html#Index_ > Scrutiny_Tool > > > > > > -- > Best regards, > Andrew > > Words like orphans lost among the crosstalk, meaning torn from truth's > decrepit hands >- A23, Crosstalk >
[jira] [Created] (PHOENIX-4285) Add PHERF.LOG_PER_NROWS constraint to PHERF.properties file
Vaitheeshwar Ramachandran created PHOENIX-4285: -- Summary: Add PHERF.LOG_PER_NROWS constraint to PHERF.properties file Key: PHOENIX-4285 URL: https://issues.apache.org/jira/browse/PHOENIX-4285 Project: Phoenix Issue Type: Improvement Reporter: Vaitheeshwar Ramachandran Priority: Minor Currently the Pherf constant LOG_PER_NROWS is set to 1M and is not configurable (it cannot be overridden for our Pherf testing purposes). Creating this improvement issue to add a LOG_PER_NROWS property to the pherf.properties file so that the value can be configured to log a line after every N rows loaded. This fix will help us capture the execution time of upserts after each N rows. This will be used for Pherf performance testing and baselining for our use cases.
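A minimal sketch of the override mechanism being requested. The property key name below is an assumption for illustration, not Pherf's actual key; only the 1M default comes from the issue:

```java
import java.io.StringReader;
import java.util.Properties;

public class LogPerNRowsConfigSketch {
    // Hypothetical key; the real Pherf property name may differ
    static final String KEY = "pherf.default.log_per_nrows";
    // Current hard-coded value per the issue description
    static final long DEFAULT = 1_000_000L;

    static long logPerNRows(Properties props) {
        // Fall back to the built-in default when no override is present
        return Long.parseLong(props.getProperty(KEY, Long.toString(DEFAULT)));
    }

    public static void main(String[] args) throws Exception {
        // Simulate a pherf.properties file that overrides the value
        Properties props = new Properties();
        props.load(new StringReader(KEY + "=50000"));
        System.out.println(logPerNRows(props));          // 50000

        // Without an override, the hard-coded default still applies
        System.out.println(logPerNRows(new Properties())); // 1000000
    }
}
```

The loader would then emit its timing line whenever `rowsUpserted % logPerNRows == 0` instead of comparing against the fixed constant.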
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congratulations and welcome, Vincent. On Wed, Oct 11, 2017 at 6:51 PM, James Taylor wrote: > On behalf of the Apache Phoenix PMC, I'm delighted to announce that Vincent > Poon has accepted our invitation to become a committer. He's had a big > impact in helping to stabilize our secondary index implementation, > including the creation of an index scrutiny tool that will detect > out-of-sync issues [1]. > > Looking forward to continued contributions. > > Please give Vincent a warm welcome to the project! > > James > > > [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scrutiny_Tool > -- Best regards, Andrew Words like orphans lost among the crosstalk, meaning torn from truth's decrepit hands - A23, Crosstalk
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congratulations and welcome, Ethan. On Wed, Oct 11, 2017 at 6:45 PM, James Taylor wrote: > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan Wang > has accepted our invitation to become a committer. He's behind some of the > great new 4.12 features of table sampling [1] and approximate count > distinct [2] along with contributing to the less sexy work of helping to > stabilize our unit tests. > > Please give Ethan a warm welcome to the project! > > James > > [1] https://phoenix.apache.org/tablesample.html > [2] https://phoenix.apache.org/language/functions.html# > approx_count_distinct > -- Best regards, Andrew Words like orphans lost among the crosstalk, meaning torn from truth's decrepit hands - A23, Crosstalk
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats Ethan! Code reviews coming your way! ;) On Thu, Oct 12, 2017 at 10:38 AM, Mujtaba Chohan wrote: > Congrats Ethan!! > > On Thu, Oct 12, 2017 at 10:03 AM, Josh Elser wrote: > > > Congrats Ethan! > > > > > > On 10/11/17 9:45 PM, James Taylor wrote: > > > >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan > >> Wang > >> has accepted our invitation to become a committer. He's behind some of > the > >> great new 4.12 features of table sampling [1] and approximate count > >> distinct [2] along with contributing to the less sexy work of helping to > >> stabilize our unit tests. > >> > >> Please give Ethan a warm welcome to the project! > >> > >> James > >> > >> [1] https://phoenix.apache.org/tablesample.html > >> [2] https://phoenix.apache.org/language/functions.html#approx_ > >> count_distinct > >> > >> >
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congrats Vincent!! On Thu, Oct 12, 2017 at 10:03 AM, Josh Elser wrote: > Congrats Vincent! > > On 10/11/17 9:51 PM, James Taylor wrote: > >> On behalf of the Apache Phoenix PMC, I'm delighted to announce that >> Vincent >> Poon has accepted our invitation to become a committer. He's had a big >> impact in helping to stabilize our secondary index implementation, >> including the creation of an index scrutiny tool that will detect >> out-of-sync issues [1]. >> >> Looking forward to continued contributions. >> >> Please give Vincent a warm welcome to the project! >> >> James >> >> >> [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scr >> utiny_Tool >> >>
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats Ethan!! On Thu, Oct 12, 2017 at 10:03 AM, Josh Elser wrote: > Congrats Ethan! > > On 10/11/17 9:45 PM, James Taylor wrote: > >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan >> Wang >> has accepted our invitation to become a committer. He's behind some of the >> great new 4.12 features of table sampling [1] and approximate count >> distinct [2] along with contributing to the less sexy work of helping to >> stabilize our unit tests. >> >> Please give Ethan a warm welcome to the project! >> >> James >> >> [1] https://phoenix.apache.org/tablesample.html >> [2] https://phoenix.apache.org/language/functions.html#approx_ >> count_distinct >> >>
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats Ethan! On 10/11/17 9:45 PM, James Taylor wrote: On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan Wang has accepted our invitation to become a committer. He's behind some of the great new 4.12 features of table sampling [1] and approximate count distinct [2] along with contributing to the less sexy work of helping to stabilize our unit tests. Please give Ethan a warm welcome to the project! James [1] https://phoenix.apache.org/tablesample.html [2] https://phoenix.apache.org/language/functions.html#approx_count_distinct
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congrats Vincent! On 10/11/17 9:51 PM, James Taylor wrote: On behalf of the Apache Phoenix PMC, I'm delighted to announce that Vincent Poon has accepted our invitation to become a committer. He's had a big impact in helping to stabilize our secondary index implementation, including the creation of an index scrutiny tool that will detect out-of-sync issues [1]. Looking forward to continued contributions. Please give Vincent a warm welcome to the project! James [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scrutiny_Tool
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats, Ethan! On Thu, Oct 12, 2017 at 9:25 AM Thomas D'Silva wrote: > Congrats Ethan! > > On Thu, Oct 12, 2017 at 8:28 AM, Geoffrey Jacoby > wrote: > > > Congrats, Ethan! Looking forward to using those new functions soon. > > > > Geoffrey > > > > On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org < > > chrajeshbab...@gmail.com> wrote: > > > > > Congratulations Ethan!! Great Job. > > > > > > Thanks, > > > Rajeshbabu. > > > > > > On Thu, Oct 12, 2017 at 7:15 AM, James Taylor > > > wrote: > > > > > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that > Ethan > > > Wang > > > > has accepted our invitation to become a committer. He's behind some > of > > > the > > > > great new 4.12 features of table sampling [1] and approximate count > > > > distinct [2] along with contributing to the less sexy work of helping > > to > > > > stabilize our unit tests. > > > > > > > > Please give Ethan a warm welcome to the project! > > > > > > > > James > > > > > > > > [1] https://phoenix.apache.org/tablesample.html > > > > [2] https://phoenix.apache.org/language/functions.html# > > > > approx_count_distinct > > > > > > > > > >
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congrats, Vincent! On Thu, Oct 12, 2017 at 9:25 AM Thomas D'Silva wrote: > Congrats Vincent! > > On Thu, Oct 12, 2017 at 8:27 AM, Geoffrey Jacoby > wrote: > > > Congrats, Vincent! Thanks for all your help on the index stabilization. > > > > On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org < > > chrajeshbab...@gmail.com> wrote: > > > > > Congratulations Vincent!! Great Job. > > > > > > Thanks, > > > Rajeshbabu. > > > > > > On Thu, Oct 12, 2017 at 7:21 AM, James Taylor > > > wrote: > > > > > > > On behalf of the Apache Phoenix PMC, I'm delighted to announce that > > > Vincent > > > > Poon has accepted our invitation to become a committer. He's had a > big > > > > impact in helping to stabilize our secondary index implementation, > > > > including the creation of an index scrutiny tool that will detect > > > > out-of-sync issues [1]. > > > > > > > > Looking forward to continued contributions. > > > > > > > > Please give Vincent a warm welcome to the project! > > > > > > > > James > > > > > > > > > > > > [1] https://phoenix.apache.org/secondary_indexing.html#Index_ > > > Scrutiny_Tool > > > > > > > > > >
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats Ethan! On Thu, Oct 12, 2017 at 8:28 AM, Geoffrey Jacoby wrote: > Congrats, Ethan! Looking forward to using those new functions soon. > > Geoffrey > > On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org < > chrajeshbab...@gmail.com> wrote: > > > Congratulations Ethan!! Great Job. > > > > Thanks, > > Rajeshbabu. > > > > On Thu, Oct 12, 2017 at 7:15 AM, James Taylor > > wrote: > > > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan > > Wang > > > has accepted our invitation to become a committer. He's behind some of > > the > > > great new 4.12 features of table sampling [1] and approximate count > > > distinct [2] along with contributing to the less sexy work of helping > to > > > stabilize our unit tests. > > > > > > Please give Ethan a warm welcome to the project! > > > > > > James > > > > > > [1] https://phoenix.apache.org/tablesample.html > > > [2] https://phoenix.apache.org/language/functions.html# > > > approx_count_distinct > > > > > >
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congrats Vincent! On Thu, Oct 12, 2017 at 8:27 AM, Geoffrey Jacoby wrote: > Congrats, Vincent! Thanks for all your help on the index stabilization. > > On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org < > chrajeshbab...@gmail.com> wrote: > > > Congratulations Vincent!! Great Job. > > > > Thanks, > > Rajeshbabu. > > > > On Thu, Oct 12, 2017 at 7:21 AM, James Taylor > > wrote: > > > > > On behalf of the Apache Phoenix PMC, I'm delighted to announce that > > Vincent > > > Poon has accepted our invitation to become a committer. He's had a big > > > impact in helping to stabilize our secondary index implementation, > > > including the creation of an index scrutiny tool that will detect > > > out-of-sync issues [1]. > > > > > > Looking forward to continued contributions. > > > > > > Please give Vincent a warm welcome to the project! > > > > > > James > > > > > > > > > [1] https://phoenix.apache.org/secondary_indexing.html#Index_ > > Scrutiny_Tool > > > > > >
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congrats, Ethan! Looking forward to using those new functions soon.

Geoffrey

On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org <chrajeshbab...@gmail.com> wrote:
> Congratulations Ethan!! Great job.
>
> Thanks,
> Rajeshbabu.
>
> On Thu, Oct 12, 2017 at 7:15 AM, James Taylor wrote:
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan Wang has accepted our invitation to become a committer. He's behind some of the great new 4.12 features of table sampling [1] and approximate count distinct [2], along with contributing to the less sexy work of helping to stabilize our unit tests.
> >
> > Please give Ethan a warm welcome to the project!
> >
> > James
> >
> > [1] https://phoenix.apache.org/tablesample.html
> > [2] https://phoenix.apache.org/language/functions.html#approx_count_distinct
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congrats, Vincent! Thanks for all your help on the index stabilization.

On Thu, Oct 12, 2017 at 1:32 AM, rajeshb...@apache.org <chrajeshbab...@gmail.com> wrote:
> Congratulations Vincent!! Great job.
>
> Thanks,
> Rajeshbabu.
>
> On Thu, Oct 12, 2017 at 7:21 AM, James Taylor wrote:
> > On behalf of the Apache Phoenix PMC, I'm delighted to announce that Vincent Poon has accepted our invitation to become a committer. He's had a big impact in helping to stabilize our secondary index implementation, including the creation of an index scrutiny tool that will detect out-of-sync issues [1].
> >
> > Looking forward to continued contributions.
> >
> > Please give Vincent a warm welcome to the project!
> >
> > James
> >
> > [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scrutiny_Tool
[jira] [Resolved] (PHOENIX-4284) Phoenix connection is not closed.
[ https://issues.apache.org/jira/browse/PHOENIX-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Geoffrey Jacoby resolved PHOENIX-4284.
--------------------------------------
    Resolution: Duplicate

Closing as duplicate.

> Phoenix connection is not closed.
> ---------------------------------
>
>                 Key: PHOENIX-4284
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4284
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.8.0
>            Reporter: jifei_yang
>            Priority: Trivial
>              Labels: features
>             Fix For: 4.8.0
>
> When I call the JDBC close() method, I find that the Phoenix connection still exists and is not closed.
> *The code is as follows:*
> {code:java}
> public class TestHbaseClose {
>     public static void main(String[] args) {
>         Connection conn = null;
>         for (int i = 0; i < 5; i++) {
>             try {
>                 conn = PhoenixConnectionTest.getConnPhoenix();
>                 System.out.println("Before is " + conn);
>                 Thread.sleep(8000);
>                 conn.close();
>                 System.out.println("After is " + conn);
>                 System.out.println(conn.isClosed());
>             } catch (Exception e) {
>                 e.printStackTrace();
>             }
>         }
>     }
> }
> {code}
> *The printed results are as follows:*
> {noformat}
> Before is org.apache.phoenix.jdbc.PhoenixConnection@2fb5fe30
> After is org.apache.phoenix.jdbc.PhoenixConnection@2fb5fe30
> true
> Before is org.apache.phoenix.jdbc.PhoenixConnection@14bae047
> After is org.apache.phoenix.jdbc.PhoenixConnection@14bae047
> true
> Before is org.apache.phoenix.jdbc.PhoenixConnection@466d49f0
> After is org.apache.phoenix.jdbc.PhoenixConnection@466d49f0
> true
> Before is org.apache.phoenix.jdbc.PhoenixConnection@40021799
> After is org.apache.phoenix.jdbc.PhoenixConnection@40021799
> true
> Before is org.apache.phoenix.jdbc.PhoenixConnection@64f555e7
> After is org.apache.phoenix.jdbc.PhoenixConnection@64f555e7
> true
> {noformat}
> Why is this? Is this a bug in version 4.8.0?
> The following link mentions this question:
> [https://issues.apache.org/jira/browse/PHOENIX-2898]

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
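Editorial note on the reported test loop: `close()` is called inside the `try` body, so any exception thrown before it (for example from `Thread.sleep`) would skip the close entirely. The try-with-resources idiom avoids that. Here is a minimal, self-contained sketch of the pattern; `FakeConnection` is a hypothetical stand-in for a JDBC connection and is not a Phoenix class:

```java
// Sketch of the try-with-resources idiom the ticket's loop could use.
// FakeConnection is a hypothetical AutoCloseable standing in for a real
// java.sql.Connection / PhoenixConnection; no Phoenix classes are used.
public class CloseDemo {
    static class FakeConnection implements AutoCloseable {
        private boolean closed = false;
        boolean isClosed() { return closed; }
        @Override
        public void close() { closed = true; }  // flips the flag, like Connection.close()
    }

    public static void main(String[] args) {
        FakeConnection last = null;
        for (int i = 0; i < 5; i++) {
            // close() is guaranteed to run when the block exits,
            // even if an exception is thrown inside it.
            try (FakeConnection conn = new FakeConnection()) {
                last = conn;  // keep a reference so we can inspect it afterwards
            }
            System.out.println("closed after block: " + last.isClosed());
        }
    }
}
```

With a real `PhoenixConnection` the shape is the same: declare the connection in the try-with-resources header and the driver's `close()` runs on every exit path, rather than only when the body completes normally.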
[GitHub] phoenix issue #274: 4.8 h base 1.2 cdh5.8
Github user highfei2011 commented on the issue: https://github.com/apache/phoenix/pull/274

Hi @daiamo, if you want to use phoenix-4.8.0-cdh-5.8.0, you can refer to https://github.com/chiastic-security/phoenix-for-cloudera
[jira] [Comment Edited] (PHOENIX-4284) Phoenix connection is not closed.
[ https://issues.apache.org/jira/browse/PHOENIX-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201493#comment-16201493 ]

jifei_yang edited comment on PHOENIX-4284 at 10/12/17 9:42 AM:
---------------------------------------------------------------

The 4.10.0 version has solved this problem!
[https://issues.apache.org/jira/browse/PHOENIX-3553]
[https://issues.apache.org/jira/browse/PHOENIX-3563]

was (Author: highfei2...@126.com):
The 4.10.0 version has solved this problem!
[https://issues.apache.org/jira/browse/PHOENIX-3553]

> Phoenix connection is not closed.
> ---------------------------------
>
>                 Key: PHOENIX-4284
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4284
Re: [ANNOUNCE] New Phoenix committer: Ethan Wang
Congratulations Ethan!! Great job.

Thanks,
Rajeshbabu.

On Thu, Oct 12, 2017 at 7:15 AM, James Taylor wrote:
> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Ethan Wang has accepted our invitation to become a committer. He's behind some of the great new 4.12 features of table sampling [1] and approximate count distinct [2], along with contributing to the less sexy work of helping to stabilize our unit tests.
>
> Please give Ethan a warm welcome to the project!
>
> James
>
> [1] https://phoenix.apache.org/tablesample.html
> [2] https://phoenix.apache.org/language/functions.html#approx_count_distinct
Re: [ANNOUNCE] New Phoenix committer: Vincent Poon
Congratulations Vincent!! Great job.

Thanks,
Rajeshbabu.

On Thu, Oct 12, 2017 at 7:21 AM, James Taylor wrote:
> On behalf of the Apache Phoenix PMC, I'm delighted to announce that Vincent Poon has accepted our invitation to become a committer. He's had a big impact in helping to stabilize our secondary index implementation, including the creation of an index scrutiny tool that will detect out-of-sync issues [1].
>
> Looking forward to continued contributions.
>
> Please give Vincent a warm welcome to the project!
>
> James
>
> [1] https://phoenix.apache.org/secondary_indexing.html#Index_Scrutiny_Tool