[jira] [Commented] (PHOENIX-1277) CSVCommonsLoader not allowing null CHAR values (non PK)
[ https://issues.apache.org/jira/browse/PHOENIX-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14607908#comment-14607908 ] Christian Wirth commented on PHOENIX-1277: -- +1 Yeah, thanks a lot for the patch. CSVCommonsLoader not allowing null CHAR values (non PK) --- Key: PHOENIX-1277 URL: https://issues.apache.org/jira/browse/PHOENIX-1277 Project: Phoenix Issue Type: Bug Affects Versions: 5.0.0 Reporter: Carter Shanklin Assignee: Gabriel Reid Priority: Minor Fix For: 5.0.0, 4.5.0, 4.4.1 Attachments: PHOENIX-1277.patch With this simple table: {code} create table dummy ( x integer primary key, y char(10) ); {code} And dataset {code} $ cat DummyValues.csv 1,x 2, 3,z {code} And running psql.py I get this: {code} psql.py -t DUMMY localhost:2181:/hbase-unsecure DummyValues.csv 14/09/22 16:31:02 ERROR util.CSVCommonsLoader: Error upserting record [2, ]: CHAR may not be null CSV Upsert complete. 2 rows upserted Time: 0.052 sec(s) {code} In sqlline I can insert nulls just fine into column y. Didn't check to see if this affects other types, e.g. varchar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
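One plausible way to address this, sketched below with a hypothetical emptyToNull helper rather than the actual CSVCommonsLoader fix: treat an empty CSV field as SQL NULL for nullable columns before binding the upsert parameter, so a nullable CHAR column receives NULL instead of an empty string.

```java
import java.util.Arrays;
import java.util.List;

public class CsvNullSketch {
    // Hypothetical helper: map an empty CSV field to null so that a
    // nullable CHAR column receives SQL NULL instead of "".
    static String emptyToNull(String field) {
        return (field == null || field.isEmpty()) ? null : field;
    }

    public static void main(String[] args) {
        // Rows from the DummyValues.csv example: "1,x", "2,", "3,z"
        List<String[]> rows = Arrays.asList(
            new String[] {"1", "x"},
            new String[] {"2", ""},
            new String[] {"3", "z"});
        for (String[] row : rows) {
            String y = emptyToNull(row[1]);
            System.out.println(row[0] + " -> " + (y == null ? "NULL" : y));
        }
    }
}
```

With this conversion, row 2's empty second field would bind as NULL rather than failing with "CHAR may not be null".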
[jira] [Commented] (PHOENIX-2025) Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps
[ https://issues.apache.org/jira/browse/PHOENIX-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609305#comment-14609305 ] Thomas D'Silva commented on PHOENIX-2025: - [~gjacoby] IndexToolIT seems to hang because of this patch. Can you please take a look? Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps - Key: PHOENIX-2025 URL: https://issues.apache.org/jira/browse/PHOENIX-2025 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.0 Reporter: Geoffrey Jacoby Assignee: Geoffrey Jacoby Fix For: 5.0.0, 4.5.0, 4.4.1 Attachments: PHOENIX-2025.patch, PHOENIX-2025_v2.patch Phoenix seems to have long had its own version of hbase-default.xml as a test resource in phoenix-core with a single setting to override hbase.defaults.for.version.skip to true. Sometime around Phoenix 4.3, phoenix-core seems to have been split into a main jar and a test jar, and the hbase-default.xml went into the test jar. The odd result of this is that in client apps that include the test jar, the classloader in HBaseConfiguration.create() now sees Phoenix's hbase-default.xml, rather than HBase's, and creates a Configuration object without HBase's defaults. One major consequence of this is that the HBaseTestingUtility can't start up, because it relies on those HBase defaults being set. This is a huge problem in a client app that includes the phoenix-core test jar in order to make use of the PhoenixTestDriver and BaseTest classes; the upgrade to 4.3 breaks all tests using the HBaseTestingUtility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (PHOENIX-2087) Ensure predictable column position during alter table
[ https://issues.apache.org/jira/browse/PHOENIX-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor reassigned PHOENIX-2087: - Assignee: James Taylor Ensure predictable column position during alter table - Key: PHOENIX-2087 URL: https://issues.apache.org/jira/browse/PHOENIX-2087 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: James Taylor Fix For: 5.0.0, 4.5.0 Attachments: PHOENIX-2087.patch, PHOENIX-2087_v2.patch The columns added in an alter table call are reversed currently. We should add them instead in the order they're listed so it's easier for a user to know what the ordinal position will end up being. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
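The reversal described above is the classic push-front effect; a minimal, Phoenix-independent sketch of why prepending each added column reverses the declared order while appending preserves the ordinal positions a user would expect:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ColumnOrderSketch {
    // Simulate ALTER TABLE ... ADD c1, c2, c3: prepending each new
    // column reverses the list relative to how the user declared it.
    static List<String> addReversed(List<String> cols) {
        List<String> out = new ArrayList<>();
        for (String c : cols) {
            out.add(0, c); // push-front
        }
        return out;
    }

    // Appending keeps the declared order, so ordinal positions are predictable.
    static List<String> addInOrder(List<String> cols) {
        return new ArrayList<>(cols);
    }

    public static void main(String[] args) {
        List<String> added = Arrays.asList("c1", "c2", "c3");
        System.out.println(addReversed(added)); // [c3, c2, c1]
        System.out.println(addInOrder(added));  // [c1, c2, c3]
    }
}
```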
[jira] [Resolved] (PHOENIX-2050) Avoid checking for child views unless operating on table
[ https://issues.apache.org/jira/browse/PHOENIX-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor resolved PHOENIX-2050. --- Resolution: Fixed Fix Version/s: (was: 4.3.1) 4.5.0 5.0.0 Avoid checking for child views unless operating on table Key: PHOENIX-2050 URL: https://issues.apache.org/jira/browse/PHOENIX-2050 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.1 Reporter: Arun Kumaran Sabtharishi Assignee: James Taylor Labels: drop, patch, view Fix For: 5.0.0, 4.5.0 Attachments: 0001-PHOENIX-2050-Avoid-checking-for-child-views-unless-o.patch Whenever a view is dropped, MetaDataEndPointImpl.findChildViews() checks whether it has child views or not. This is doing a full scan in all the rows in SYSTEM.CATALOG which reduces performance. When the number of rows is very high, it causes timeout. The check has to be done only for the tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (PHOENIX-2092) [BRAINSTORMING] Support read-your-own-writes semantics without sending updates to server
James Taylor created PHOENIX-2092: - Summary: [BRAINSTORMING] Support read-your-own-writes semantics without sending updates to server Key: PHOENIX-2092 URL: https://issues.apache.org/jira/browse/PHOENIX-2092 Project: Phoenix Issue Type: Bug Reporter: James Taylor Our current transaction integration sends uncommitted data to the HBase server when a client attempts to read on a connection with uncommitted data. Instead, we could (in theory) keep the data on the client and treat these local edits as a kind of separate region which would get merged into the results of any queries. Unclear how many cases would be handled, though: - partially aggregated results would need to be adjusted (i.e. subtract the sum from an overridden value and add the sum from the new value) - secondary index usage - the client would need to know the rows to delete and the rows to add to the index table - deleted rows Parking this here for now as food for thought. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
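The "separate region" idea above can be sketched as a client-side overlay, with illustrative names only (this is not a proposed Phoenix API): uncommitted local edits are merged into the server's rows at read time, local values shadowing server values and local deletes removing rows.

```java
import java.util.Map;
import java.util.TreeMap;

public class LocalOverlaySketch {
    // Merge uncommitted local edits over server-side rows; a null value
    // in the local map marks a pending delete.
    static TreeMap<String, String> merge(Map<String, String> serverRows,
                                         Map<String, String> localEdits) {
        TreeMap<String, String> merged = new TreeMap<>(serverRows);
        for (Map.Entry<String, String> e : localEdits.entrySet()) {
            if (e.getValue() == null) {
                merged.remove(e.getKey());            // deleted locally
            } else {
                merged.put(e.getKey(), e.getValue()); // local value wins
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        TreeMap<String, String> server = new TreeMap<>();
        server.put("row1", "server-v1");
        server.put("row2", "server-v2");

        TreeMap<String, String> local = new TreeMap<>();
        local.put("row2", "local-v2"); // overrides the server value
        local.put("row3", "local-v3"); // new row, never sent to the server

        System.out.println(merge(server, local));
        // {row1=server-v1, row2=local-v2, row3=local-v3}
    }
}
```

As the issue notes, the simple key-value case is the easy part; partially aggregated results and secondary index maintenance would need adjustment logic beyond this overlay.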
[jira] [Commented] (PHOENIX-2025) Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps
[ https://issues.apache.org/jira/browse/PHOENIX-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609475#comment-14609475 ] Thomas D'Silva commented on PHOENIX-2025: - After setting hbase.defaults.for.version.skip to true IndexToolIT passes. Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps - Key: PHOENIX-2025 URL: https://issues.apache.org/jira/browse/PHOENIX-2025 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.0 Reporter: Geoffrey Jacoby Assignee: Geoffrey Jacoby Fix For: 5.0.0, 4.5.0, 4.4.1 Attachments: PHOENIX-2025.patch, PHOENIX-2025_v2.patch Phoenix seems to have long had its own version of hbase-default.xml as a test resource in phoenix-core with a single setting to override hbase.defaults.for.version.skip to true. Sometime around Phoenix 4.3, phoenix-core seems to have been split into a main jar and a test jar, and the hbase-default.xml went into the test jar. The odd result of this is that in client apps that include the test jar, the classloader in HBaseConfiguration.create() now sees Phoenix's hbase-default.xml, rather than HBase's, and creates a Configuration object without HBase's defaults. One major consequence of this is that the HBaseTestingUtility can't start up, because it relies on those HBase defaults being set. This is a huge problem in a client app that includes the phoenix-core test jar in order to make use of the PhoenixTestDriver and BaseTest classes; the upgrade to 4.3 breaks all tests using the HBaseTestingUtility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: ApacheCON EU HBase Track Submissions
Get your submissions in, the deadline is imminent! On Thu, Jun 25, 2015 at 11:48 AM, Nick Dimiduk ndimi...@apache.org wrote: Hello developers, users, speakers, In honor of ApacheCON's inaugural Apache: Big Data event, I'm hoping to see a HBase: NoSQL + SQL track come together. The idea is to showcase the growing ecosystem of applications and tools built on top of and around Apache HBase. To have a track, we need content, and that's where YOU come in. CFP for ApacheCon closes in one week, July 1. Get your Phoenix talks submitted so we can pull together a full day of great HBase ecosystem talks! Thanks, Nick ApacheCON EU Sept 28 - Oct 2 Corinthia Hotel, Budapest, Hungary (a beautiful venue in an awesome city!) http://apachecon.eu/ CFP link: http://events.linuxfoundation.org/cfp/dashboard
[jira] [Commented] (PHOENIX-2025) Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps
[ https://issues.apache.org/jira/browse/PHOENIX-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609414#comment-14609414 ] James Taylor commented on PHOENIX-2025: --- [~tdsilva] - try manually setting hbase.defaults.for.version.skip = true in the doSetup() method of the class to work around this issue. If that fails, you can revert this change and confirm any changes you're making prior to check-in. Would appreciate it if you could help figure out the root cause, though, [~gjacoby]. Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps - Key: PHOENIX-2025 URL: https://issues.apache.org/jira/browse/PHOENIX-2025 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.0 Reporter: Geoffrey Jacoby Assignee: Geoffrey Jacoby Fix For: 5.0.0, 4.5.0, 4.4.1 Attachments: PHOENIX-2025.patch, PHOENIX-2025_v2.patch Phoenix seems to have long had its own version of hbase-default.xml as a test resource in phoenix-core with a single setting to override hbase.defaults.for.version.skip to true. Sometime around Phoenix 4.3, phoenix-core seems to have been split into a main jar and a test jar, and the hbase-default.xml went into the test jar. The odd result of this is that in client apps that include the test jar, the classloader in HBaseConfiguration.create() now sees Phoenix's hbase-default.xml, rather than HBase's, and creates a Configuration object without HBase's defaults. One major consequence of this is that the HBaseTestingUtility can't start up, because it relies on those HBase defaults being set. This is a huge problem in a client app that includes the phoenix-core test jar in order to make use of the PhoenixTestDriver and BaseTest classes; the upgrade to 4.3 breaks all tests using the HBaseTestingUtility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
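The suggested workaround amounts to forcing the skip flag in the test configuration, e.g. as an hbase-site.xml override (the same effect can be achieved programmatically with conf.setBoolean("hbase.defaults.for.version.skip", true) on the Configuration before starting the mini-cluster):

{code}
<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>
{code}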
[jira] [Commented] (PHOENIX-2087) Ensure predictable column position during alter table
[ https://issues.apache.org/jira/browse/PHOENIX-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609480#comment-14609480 ] Hudson commented on PHOENIX-2087: - FAILURE: Integrated in Phoenix-master #808 (See [https://builds.apache.org/job/Phoenix-master/808/]) PHOENIX-2087 Ensure predictable column position during alter table (jtaylor: rev 72a7356bcade01990a59cfd5d72161f18ae909f3) * phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java * phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java * phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java Ensure predictable column position during alter table - Key: PHOENIX-2087 URL: https://issues.apache.org/jira/browse/PHOENIX-2087 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: James Taylor Fix For: 5.0.0, 4.5.0 Attachments: PHOENIX-2087.patch, PHOENIX-2087_v2.patch The columns added in an alter table call are reversed currently. We should add them instead in the order they're listed so it's easier for a user to know what the ordinal position will end up being. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2025) Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps
[ https://issues.apache.org/jira/browse/PHOENIX-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609483#comment-14609483 ] James Taylor commented on PHOENIX-2025: --- FYI, [~mujtabachohan] - this may help our unit tests complete again on the 1.0 and 1.1 branches Phoenix-core's hbase-default.xml prevents HBaseTestingUtility from starting up in client apps - Key: PHOENIX-2025 URL: https://issues.apache.org/jira/browse/PHOENIX-2025 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.0 Reporter: Geoffrey Jacoby Assignee: Geoffrey Jacoby Fix For: 5.0.0, 4.5.0, 4.4.1 Attachments: PHOENIX-2025.patch, PHOENIX-2025_v2.patch Phoenix seems to have long had its own version of hbase-default.xml as a test resource in phoenix-core with a single setting to override hbase.defaults.for.version.skip to true. Sometime around Phoenix 4.3, phoenix-core seems to have been split into a main jar and a test jar, and the hbase-default.xml went into the test jar. The odd result of this is that in client apps that include the test jar, the classloader in HBaseConfiguration.create() now sees Phoenix's hbase-default.xml, rather than HBase's, and creates a Configuration object without HBase's defaults. One major consequence of this is that the HBaseTestingUtility can't start up, because it relies on those HBase defaults being set. This is a huge problem in a client app that includes the phoenix-core test jar in order to make use of the PhoenixTestDriver and BaseTest classes; the upgrade to 4.3 breaks all tests using the HBaseTestingUtility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2050) Avoid checking for child views unless operating on table
[ https://issues.apache.org/jira/browse/PHOENIX-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609479#comment-14609479 ] Hudson commented on PHOENIX-2050: - FAILURE: Integrated in Phoenix-master #808 (See [https://builds.apache.org/job/Phoenix-master/808/]) PHOENIX-2050 Avoid checking for child views unless operating (jtaylor: rev a8a9d01d1eaafc33ea73913bec16254ac6a55be3) * phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java * phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java Avoid checking for child views unless operating on table Key: PHOENIX-2050 URL: https://issues.apache.org/jira/browse/PHOENIX-2050 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.1 Reporter: Arun Kumaran Sabtharishi Assignee: James Taylor Labels: drop, patch, view Fix For: 5.0.0, 4.5.0 Attachments: 0001-PHOENIX-2050-Avoid-checking-for-child-views-unless-o.patch Whenever a view is dropped, MetaDataEndPointImpl.findChildViews() checks whether it has child views or not. This is doing a full scan in all the rows in SYSTEM.CATALOG which reduces performance. When the number of rows is very high, it causes timeout. The check has to be done only for the tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (PHOENIX-2087) Ensure predictable column position during alter table
[ https://issues.apache.org/jira/browse/PHOENIX-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor resolved PHOENIX-2087. --- Resolution: Fixed Fix Version/s: 4.5.0 5.0.0 Ensure predictable column position during alter table - Key: PHOENIX-2087 URL: https://issues.apache.org/jira/browse/PHOENIX-2087 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: James Taylor Fix For: 5.0.0, 4.5.0 Attachments: PHOENIX-2087.patch, PHOENIX-2087_v2.patch The columns added in an alter table call are reversed currently. We should add them instead in the order they're listed so it's easier for a user to know what the ordinal position will end up being. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function
[ https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609551#comment-14609551 ] ramkrishna.s.vasudevan commented on PHOENIX-1875: - This commit was missing in 4.x-branch-1.1. Thanks to [~Dumindux] for checking on that. Now cherry-picked the commit to 4.x-branch-1.1. This would allow ARRAY_CAT and ARRAY_FILL to be checked in to 4.x branches. implement ARRAY_PREPEND built in function - Key: PHOENIX-1875 URL: https://issues.apache.org/jira/browse/PHOENIX-1875 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 5.0.0, 4.4.0 Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch, PHOENIX-1875-v6.patch ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3] ARRAY_PREPEND(a, ARRAY[b, c]) = ARRAY[a, b, c] ARRAY_PREPEND(null, ARRAY[b, c]) = ARRAY[null, b, c] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (PHOENIX-2064) ARRAY constructor doesn't work when used in COUNT DISTINCT
[ https://issues.apache.org/jira/browse/PHOENIX-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dumindu Buddhika reassigned PHOENIX-2064: - Assignee: Dumindu Buddhika ARRAY constructor doesn't work when used in COUNT DISTINCT -- Key: PHOENIX-2064 URL: https://issues.apache.org/jira/browse/PHOENIX-2064 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: Dumindu Buddhika As a workaround for PHOENIX-2062, I tried the following query: {code} SELECT COUNT(DISTINCT ARRAY[a.col1, b.col2]) ... {code} However, this always returns the full number of rows which is wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608189#comment-14608189 ] Dumindu Buddhika commented on PHOENIX-2060: --- It is covered here in end-to-end tests. The char column here has a maxLength of 15. {code} @Test public void testArrayFillFunctionChar() throws Exception { Connection conn = DriverManager.getConnection(getUrl()); initTables(conn); ResultSet rs; rs = conn.createStatement().executeQuery("SELECT ARRAY_FILL(char,4) FROM regions WHERE region_name = 'SF Bay Area'"); assertTrue(rs.next()); Object[] objects = new Object[]{"foo", "foo", "foo", "foo"}; Array array = conn.createArrayOf("CHAR", objects); assertEquals(array, rs.getArray(1)); assertFalse(rs.next()); } {code} Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) -> ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) -> ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608232#comment-14608232 ] ramkrishna.s.vasudevan commented on PHOENIX-2060: - Fine with me. I'm +1 on the patch. [~giacomotaylor], do you have any feedback? Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) -> ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) -> ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-1661) Implement built-in functions for JSON
[ https://issues.apache.org/jira/browse/PHOENIX-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608391#comment-14608391 ] ASF GitHub Bot commented on PHOENIX-1661: - Github user ictwanglei commented on the pull request: https://github.com/apache/phoenix/pull/93#issuecomment-117210029 @AakashPradeep sorry for the missing comments and other coding mistakes, and thanks for your patient guidance. I have modified the code according to your suggestion. Implement built-in functions for JSON - Key: PHOENIX-1661 URL: https://issues.apache.org/jira/browse/PHOENIX-1661 Project: Phoenix Issue Type: Bug Reporter: James Taylor Labels: JSON, Java, SQL, gsoc2015, mentor Attachments: PhoenixJSONSpecification-First-Draft.pdf Take a look at the JSON built-in functions that are implemented in Postgres (http://www.postgresql.org/docs/9.3/static/functions-json.html) and implement the same for Phoenix in Java following this guide: http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html Examples of functions include ARRAY_TO_JSON, ROW_TO_JSON, TO_JSON, etc. The implementation of these built-in functions will be impacted by how JSON is stored in Phoenix. See PHOENIX-628. An initial implementation could work off of a simple text-based JSON representation and then when a native JSON type is implemented, they could be reworked to be more efficient. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (PHOENIX-2088) Prevent splitting and recombining select expressions for MR integration
James Taylor created PHOENIX-2088: - Summary: Prevent splitting and recombining select expressions for MR integration Key: PHOENIX-2088 URL: https://issues.apache.org/jira/browse/PHOENIX-2088 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: Thomas D'Silva We currently pass the select expressions for the MR integration as a delimiter-separated string, split it on the delimiter, and then recombine the pieces with a comma separator. This is problematic because the delimiter character may appear in a select expression, breaking this logic. Instead, we should use a comma as the delimiter and skip the splitting and recombining, as it isn't necessary in that case: the entire string can be used as-is to form the select expressions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
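A minimal demonstration of the failure mode, using ':' purely as an illustrative delimiter: when an expression legitimately contains the delimiter character, split-then-rejoin corrupts it, whereas passing the full comma-separated string through untouched does not.

```java
import java.util.Arrays;

public class DelimiterSketch {
    public static void main(String[] args) {
        // Suppose ':' is the configured delimiter and one select
        // expression legitimately contains it.
        String delimited = "ID:TO_CHAR(CREATED, 'yyyy:MM')";
        String[] parts = delimited.split(":");
        System.out.println(Arrays.toString(parts));
        // The expression is split in the wrong place, so rejoining
        // with ", " corrupts it:
        String rejoined = String.join(", ", parts);
        System.out.println(rejoined);
        // ID, TO_CHAR(CREATED, 'yyyy, MM')

        // Avoiding the round trip entirely, the string is used as-is:
        String selectList = "ID, TO_CHAR(CREATED, 'yyyy:MM')";
        System.out.println("SELECT " + selectList + " FROM T");
    }
}
```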
[GitHub] phoenix pull request: PHOENIX-1661 Implement built-in functions fo...
Github user ictwanglei commented on the pull request: https://github.com/apache/phoenix/pull/93#issuecomment-117210029 @AakashPradeep sorry for the missing comments and other coding mistakes, and thanks for your patient guidance. I have modified the code according to your suggestion. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Updated] (PHOENIX-2087) Ensure predictable column position during alter table
[ https://issues.apache.org/jira/browse/PHOENIX-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor updated PHOENIX-2087: -- Attachment: PHOENIX-2087_v2.patch Final patch with unit tests. Ensure predictable column position during alter table - Key: PHOENIX-2087 URL: https://issues.apache.org/jira/browse/PHOENIX-2087 Project: Phoenix Issue Type: Bug Reporter: James Taylor Attachments: PHOENIX-2087.patch, PHOENIX-2087_v2.patch The columns added in an alter table call are reversed currently. We should add them instead in the order they're listed so it's easier for a user to know what the ordinal position will end up being. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2064) ARRAY constructor doesn't work when used in COUNT DISTINCT
[ https://issues.apache.org/jira/browse/PHOENIX-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608545#comment-14608545 ] James Taylor commented on PHOENIX-2064: --- [~Dumindux] - would you mind taking a look at this one? ARRAY constructor doesn't work when used in COUNT DISTINCT -- Key: PHOENIX-2064 URL: https://issues.apache.org/jira/browse/PHOENIX-2064 Project: Phoenix Issue Type: Bug Reporter: James Taylor As a workaround for PHOENIX-2062, I tried the following query: {code} SELECT COUNT(DISTINCT ARRAY[a.col1, b.col2]) ... {code} However, this always returns the full number of rows which is wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608544#comment-14608544 ] James Taylor commented on PHOENIX-2060: --- +1. Nice tests, [~Dumindux]. Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) -> ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) -> ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (PHOENIX-2090) Refine PhoenixTableScan.computeSelfCost() when scanRanges is available
Maryann Xue created PHOENIX-2090: Summary: Refine PhoenixTableScan.computeSelfCost() when scanRanges is available Key: PHOENIX-2090 URL: https://issues.apache.org/jira/browse/PHOENIX-2090 Project: Phoenix Issue Type: Sub-task Reporter: Maryann Xue We should compute a more accurate cost based on the scanRanges so that we can better choose among different indices. For example, suppose we have more than one index over different key columns: IDX1 is indexed on column a, IDX2 is indexed on column b, and the query is select x, y, z where a between 'A1' and 'A2' and b between 'B3' and 'B4'; a scanRanges-based cost estimate would tell us which index prunes more rows. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (PHOENIX-2089) Use index when skip-scan is possible or when ordering key matches
Maryann Xue created PHOENIX-2089: Summary: Use index when skip-scan is possible or when ordering key matches Key: PHOENIX-2089 URL: https://issues.apache.org/jira/browse/PHOENIX-2089 Project: Phoenix Issue Type: Sub-task Reporter: Maryann Xue Assignee: Maryann Xue This is very basic work for secondary index and depends on the fix for CALCITE-761 and CALCITE-763. Further work should be done in PhoenixTableScan.computeSelfCost() to give a more accurate estimate for better matching among different indices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2058) Check for existence and compatibility of columns being added in view
[ https://issues.apache.org/jira/browse/PHOENIX-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608634#comment-14608634 ] James Taylor commented on PHOENIX-2058: --- The check should be included in the for loop at MetaDataEndPointImpl:1616: {code} for (Mutation m : tableMetadata) { byte[][] rkmd = new byte[5][]; int pkCount = getVarChars(m.getRow(), rkmd); if (m instanceof Put && pkCount > COLUMN_NAME_INDEX && Bytes.compareTo(schemaName, rkmd[SCHEMA_NAME_INDEX]) == 0 && Bytes.compareTo(tableName, rkmd[TABLE_NAME_INDEX]) == 0) { Put p = (Put)m; byte[] columnKey = ByteUtil.concat(viewKey, QueryConstants.SEPARATOR_BYTE_ARRAY, rkmd[COLUMN_NAME_INDEX]); if (rkmd[FAMILY_NAME_INDEX] != null) { columnKey = ByteUtil.concat(columnKey, QueryConstants.SEPARATOR_BYTE_ARRAY, rkmd[FAMILY_NAME_INDEX]); } Put viewColumnDefinitionPut = new Put(columnKey, clientTimeStamp); for (Cell cell : p.getFamilyCellMap().values().iterator().next()) { viewColumnDefinitionPut.add(CellUtil.createCell(columnKey, CellUtil.cloneFamily(cell), CellUtil.cloneQualifier(cell), cell.getTimestamp(), cell.getTypeByte(), CellUtil.cloneValue(cell))); } {code} The view is the PTable that represents the view that we're adding the column to. The for loop is looping through the mutations from the base table being added. First, outside the loop, you can check if the view already has the column that's being added using the following code: {code} PColumn existingViewColumn = null; try { // Maybe deserving of a new SchemaUtil.getColumnByName(byte[] familyName, String columnName) function String columnName = Bytes.toString(rkmd[COLUMN_NAME_INDEX]); existingViewColumn = rkmd[FAMILY_NAME_INDEX] == null ? 
view.getPKColumn(columnName) : view.getColumnFamily(rkmd[FAMILY_NAME_INDEX]).getColumn(columnName); } catch (MetaDataEntityNotFoundException e) {} // Ignore - means column family or column name don't exist {code} Within this loop, only if existingViewColumn != null, you'd want to check the CellUtil.cloneQualifier(cell) being equal to DATA_TYPE_BYTES (this is the sqlTypeID that you can use to get the PDataType), COLUMN_SIZE_BYTES (this is maxLength), DECIMAL_DIGITS_BYTES (this is scale), and KEY_SEQ_BYTES (this is the slot position within the PK which has to match, but only if the column being added is a PK column). Then, outside the loop (or even inside of it when possible), do the checks mentioned in the description of this JIRA. If one of the checks fails, just return the following (change the return type of addRowsToChildViews to MetaDataMutationResult) and return early in the caller if you get a non-null MetaDataMutationResult from addRowsToChildViews: {code} return new MetaDataMutationResult(MutationCode.UNALLOWED_TABLE_MUTATION, EnvironmentEdgeManager.currentTimeMillis(), null); {code} We can eventually do a better job of returning a more specific exception, but this is ok for now. Check for existence and compatibility of columns being added in view Key: PHOENIX-2058 URL: https://issues.apache.org/jira/browse/PHOENIX-2058 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: Thomas D'Silva One check I realized we're not doing, but need to do, is ensuring that the column being added by the base table doesn't already exist in the view. If the column does already exist, ideally we can allow the addition to the base table if the type matches and the scale is null or <= the existing scale and the maxLength is null or <= the existing maxLength. Also, if a column is a PK column and it already exists in the view, the position in the PK must match. The fact that we've materialized a PTable for the view should make the addition of this check easier. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
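The compatibility rules described above (matching type; scale and maxLength null or within the existing column's bounds; matching PK position) can be sketched as a standalone predicate. The names and parameter shapes here are hypothetical, not the real MetaDataEndpointImpl/PColumn types:

```java
public class ViewColumnCompatSketch {
    // Hypothetical check mirroring the rules in the JIRA description:
    // same type; new scale/maxLength null or <= the view column's;
    // for PK columns, the key-sequence position must match.
    static boolean isCompatible(String newType, Integer newMaxLen, Integer newScale, Integer newKeySeq,
                                String oldType, Integer oldMaxLen, Integer oldScale, Integer oldKeySeq) {
        if (!newType.equals(oldType)) return false;
        if (newMaxLen != null && oldMaxLen != null && newMaxLen > oldMaxLen) return false;
        if (newScale != null && oldScale != null && newScale > oldScale) return false;
        if (oldKeySeq != null && !oldKeySeq.equals(newKeySeq)) return false; // PK slot must match
        return true;
    }

    public static void main(String[] args) {
        // VARCHAR(10) added where the view already has VARCHAR(20): fits.
        System.out.println(isCompatible("VARCHAR", 10, null, null, "VARCHAR", 20, null, null)); // true
        // VARCHAR(30) into VARCHAR(20): could truncate, so reject.
        System.out.println(isCompatible("VARCHAR", 30, null, null, "VARCHAR", 20, null, null)); // false
    }
}
```

On failure, the real code would return the UNALLOWED_TABLE_MUTATION result shown in the comment above instead of a boolean.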
[jira] [Resolved] (PHOENIX-2021) Implement ARRAY_CAT built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan resolved PHOENIX-2021. - Resolution: Fixed Fix Version/s: 4.4.1 Implement ARRAY_CAT built in function - Key: PHOENIX-2021 URL: https://issues.apache.org/jira/browse/PHOENIX-2021 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.4.1 Attachments: PHOENIX-2021-v3.patch, PHOENIX-2021-v4.patch, PHOENIX-2021-v5.patch, PHOENIX-2021.patch Ex: ARRAY_CAT(ARRAY[2, 3, 4], ARRAY[4, 5, 6]) = ARRAY[2,3,4,4,5,6] ARRAY_CAT(ARRAY[a, b], ARRAY[c, d]) = ARRAY[a, b, c, d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2021) Implement ARRAY_CAT built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608701#comment-14608701 ] ramkrishna.s.vasudevan commented on PHOENIX-2021: - [~giacomotaylor] Ok. I shall commit this to 4.x branches. [~Dumindux] The cherry pick from master to 4.x branch does not happen cleanly. Can you prepare an updated patch for 4.x branch? Implement ARRAY_CAT built in function - Key: PHOENIX-2021 URL: https://issues.apache.org/jira/browse/PHOENIX-2021 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2021-v3.patch, PHOENIX-2021-v4.patch, PHOENIX-2021-v5.patch, PHOENIX-2021.patch Ex: ARRAY_CAT(ARRAY[2, 3, 4], ARRAY[4, 5, 6]) = ARRAY[2,3,4,4,5,6] ARRAY_CAT(ARRAY[a, b], ARRAY[c, d]) = ARRAY[a, b, c, d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (PHOENIX-2021) Implement ARRAY_CAT built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated PHOENIX-2021: Fix Version/s: (was: 4.4.1) 4.5.0 Implement ARRAY_CAT built in function - Key: PHOENIX-2021 URL: https://issues.apache.org/jira/browse/PHOENIX-2021 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2021-v3.patch, PHOENIX-2021-v4.patch, PHOENIX-2021-v5.patch, PHOENIX-2021.patch Ex: ARRAY_CAT(ARRAY[2, 3, 4], ARRAY[4, 5, 6]) = ARRAY[2,3,4,4,5,6] ARRAY_CAT(ARRAY[a, b], ARRAY[c, d]) = ARRAY[a, b, c, d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan resolved PHOENIX-2060. - Resolution: Fixed Fix Version/s: 4.5.0 Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
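[Editor's note] ARRAY_FILL(element, length) behaves like filling a fixed-size array with one value. A minimal Java illustration of the semantics (not the Phoenix implementation):

```java
import java.util.Arrays;

public class ArrayFillExample {
    // ARRAY_FILL(element, length): every slot of the result holds the supplied
    // element, e.g. ARRAY_FILL(4, 5) -> ARRAY[4, 4, 4, 4, 4].
    public static int[] arrayFill(int element, int length) {
        int[] result = new int[length];
        Arrays.fill(result, element);
        return result;
    }
}
```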
[jira] [Reopened] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan reopened PHOENIX-2060: - Reopening to push to 4.x branches. Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608655#comment-14608655 ] ramkrishna.s.vasudevan commented on PHOENIX-2060: - Pushed to master. Thanks for the nice work [~Dumindux]. Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (PHOENIX-2021) Implement ARRAY_CAT built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan reopened PHOENIX-2021: - Reopening to push to 4.x branches. Implement ARRAY_CAT built in function - Key: PHOENIX-2021 URL: https://issues.apache.org/jira/browse/PHOENIX-2021 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2021-v3.patch, PHOENIX-2021-v4.patch, PHOENIX-2021-v5.patch, PHOENIX-2021.patch Ex: ARRAY_CAT(ARRAY[2, 3, 4], ARRAY[4, 5, 6]) = ARRAY[2,3,4,4,5,6] ARRAY_CAT(ARRAY[a, b], ARRAY[c, d]) = ARRAY[a, b, c, d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (PHOENIX-2089) Use index when skip-scan is possible or when ordering key matches
[ https://issues.apache.org/jira/browse/PHOENIX-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maryann Xue resolved PHOENIX-2089. -- Resolution: Fixed Use index when skip-scan is possible or when ordering key matches - Key: PHOENIX-2089 URL: https://issues.apache.org/jira/browse/PHOENIX-2089 Project: Phoenix Issue Type: Sub-task Reporter: Maryann Xue Assignee: Maryann Xue Original Estimate: 240h Remaining Estimate: 240h This is very basic work for secondary index and depends on the fix for CALCITE-761 and CALCITE-763. Further work should be done in PhoenixTableScan.computeSelfCost() to give a more accurate estimate for better matching among different indices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (PHOENIX-2091) Add materialized view definition automatically based on the available indices
Maryann Xue created PHOENIX-2091: Summary: Add materialized view definition automatically based on the available indices Key: PHOENIX-2091 URL: https://issues.apache.org/jira/browse/PHOENIX-2091 Project: Phoenix Issue Type: Sub-task Reporter: Maryann Xue We should look for available indices in Phoenix and add them as pre-populated materialized views in Calcite. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2021) Implement ARRAY_CAT built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608646#comment-14608646 ] James Taylor commented on PHOENIX-2021: --- [~ram_krish] - any new built-in function should go into the 4.x branches and master, not the 4.4 branches. These built-in functions will appear in the 4.5.0 release (as they can't appear in a patch release). Implement ARRAY_CAT built in function - Key: PHOENIX-2021 URL: https://issues.apache.org/jira/browse/PHOENIX-2021 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.4.1 Attachments: PHOENIX-2021-v3.patch, PHOENIX-2021-v4.patch, PHOENIX-2021-v5.patch, PHOENIX-2021.patch Ex: ARRAY_CAT(ARRAY[2, 3, 4], ARRAY[4, 5, 6]) = ARRAY[2,3,4,4,5,6] ARRAY_CAT(ARRAY[a, b], ARRAY[c, d]) = ARRAY[a, b, c, d] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608681#comment-14608681 ] Hudson commented on PHOENIX-2060: - FAILURE: Integrated in Phoenix-master #806 (See [https://builds.apache.org/job/Phoenix-master/806/]) PHOENIX-2060 - Implement ARRAY_FILL built in function (Dumindu Buddhika) (ramkrishna: rev c0ad8cf6772b59e0ee24d1a4e8bc935d35a26a13) * phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (PHOENIX-1954) Reserve chunks of numbers for a sequence
[ https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Fernando updated PHOENIX-1954: -- Assignee: Jan Fernando (was: Thomas D'Silva) Reserve chunks of numbers for a sequence Key: PHOENIX-1954 URL: https://issues.apache.org/jira/browse/PHOENIX-1954 Project: Phoenix Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Jan Fernando In order to be able to generate many ids in bulk (for example in map reduce jobs) we need a way to generate or reserve large sets of ids. We also need to mix ids reserved with incrementally generated ids from other clients. For this we need to atomically increment the sequence and return the value it had when the increment happened. If we're OK to throw the current cached set of values away we can do {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the value it incremented from (i.e. it has to throw the current cache away, and return the next value it found at the server). Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the same, but does not invalidate the cache. Note that in either case we won't retrieve the reserved set of values via {{NEXT VALUE FOR}} because we'd need to be idempotent. In our case, all we need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}} which returns a value M, the range [M, M+N) won't be used by any other user of the sequence. We might need to reserve 1bn ids this way ahead of a map reduce run. Any better ideas? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
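[Editor's note] The reservation contract described above - a call returning a value M that guarantees [M, M+N) is never handed to any other caller - is exactly an atomic fetch-and-add. A toy single-process model (names are illustrative; the real implementation does this server-side against the SYSTEM.SEQUENCE table):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of RESERVE VALUES FOR seq, N: the caller receives the value before
// the increment, and the atomic getAndAdd guarantees no other caller can be
// handed any value in [M, M + n).
public class SequenceModel {
    private final AtomicLong value = new AtomicLong(1);

    public long reserveValues(long n) {
        return value.getAndAdd(n); // returns M; next caller starts at M + n
    }
}
```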
[jira] [Resolved] (PHOENIX-2076) Separator for select expression causes problems if used in expressions
[ https://issues.apache.org/jira/browse/PHOENIX-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas D'Silva resolved PHOENIX-2076. - Resolution: Duplicate Separator for select expression causes problems if used in expressions -- Key: PHOENIX-2076 URL: https://issues.apache.org/jira/browse/PHOENIX-2076 Project: Phoenix Issue Type: Bug Reporter: James Taylor Assignee: Thomas D'Silva Instead of parsing the select expressions and putting them back together, we can just use a comma as the separator without even needing a separator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[ANNOUNCE] HBase 1.1.1 is now available for download
The HBase team is happy to announce the availability of HBase 1.1.1! Download it from an Apache mirror near you, http://www.apache.org/dyn/closer.cgi/hbase/, or wire up through the maven repo. HBase 1.1.1 is the first patch release in the HBase 1.1 line, continuing on the theme of bringing a stable, reliable database to the Hadoop and NoSQL communities. This release includes over 100 bug fixes since the 1.1.0 release, including an assignment manager bug that can lead to data loss in rare cases. Users of 1.1.0 are strongly encouraged to update to 1.1.1 as soon as possible. The full list of issues can be found at https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12332169 Thanks to all the contributors who made this release possible! Cheers, The HBase Dev Team
[jira] [Commented] (PHOENIX-2087) Ensure predictable column position during alter table
[ https://issues.apache.org/jira/browse/PHOENIX-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608987#comment-14608987 ] Thomas D'Silva commented on PHOENIX-2087: - +1 LGTM Ensure predictable column position during alter table - Key: PHOENIX-2087 URL: https://issues.apache.org/jira/browse/PHOENIX-2087 Project: Phoenix Issue Type: Bug Reporter: James Taylor Attachments: PHOENIX-2087.patch, PHOENIX-2087_v2.patch The columns added in an alter table call are reversed currently. We should add them instead in the order they're listed so it's easier for a user to know what the ordinal position will end up being. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608949#comment-14608949 ] Hudson commented on PHOENIX-2060: - FAILURE: Integrated in Phoenix-master #807 (See [https://builds.apache.org/job/Phoenix-master/807/]) PHOENIX-2060 - ARRAY_FILL Push the new files (ramkrishna: rev fb8c9413f6583798059741fb7c03c8c04a2c3336) * phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java * phoenix-core/src/test/java/org/apache/phoenix/expression/ArrayFillFunctionTest.java * phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayFillFunctionIT.java Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Fix For: 4.5.0 Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2050) Avoid checking for child views unless operating on table
[ https://issues.apache.org/jira/browse/PHOENIX-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14608921#comment-14608921 ] Thomas D'Silva commented on PHOENIX-2050: - +1 LGTM Avoid checking for child views unless operating on table Key: PHOENIX-2050 URL: https://issues.apache.org/jira/browse/PHOENIX-2050 Project: Phoenix Issue Type: Bug Affects Versions: 4.3.1 Reporter: Arun Kumaran Sabtharishi Assignee: James Taylor Labels: drop, patch, view Fix For: 4.3.1 Attachments: 0001-PHOENIX-2050-Avoid-checking-for-child-views-unless-o.patch Whenever a view is dropped, MetaDataEndPointImpl.findChildViews() checks whether it has child views or not. This is doing a full scan in all the rows in SYSTEM.CATALOG which reduces performance. When the number of rows is very high, it causes timeout. The check has to be done only for the tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [jira] [Updated] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information
Hi Nick, Regarding the implementation: the user will search for a particular query. The query will be converted into a statement similar to the description column by GenerateStatementService. This will be rendered as a service. This service uses a StatementFactory, which is responsible for the conversion. The factory is written to be extendable, as the Phoenix queries may change. I have started working on milestone 2. It includes the above description, found here: https://github.com/AyolaJayamaha/TracingWebApp/commits/milestone-2. The last two commits are for the service and factory. Implementation will happen as described above and is still in progress. Thanks, Nishani. On Tue, Jun 30, 2015 at 3:33 AM, Nick Dimiduk ndimi...@apache.org wrote: Hi Nishani, Any progress on your module for including the SQL query in the UI? Thanks, Nick On Wed, Jun 24, 2015 at 11:46 AM, Ayola Jayamaha raphaelan...@gmail.com wrote: Hi All, I'll create a javascript module in angular to solve this issue and share. Thanks, Nishani. On Thu, Jun 25, 2015 at 12:09 AM, James Taylor jamestay...@apache.org wrote: Yes, exactly right. On Wed, Jun 24, 2015 at 11:35 AM, Ayola Jayamaha raphaelan...@gmail.com wrote: Hi All, Now it is clear. We can create a statement from the user's input query to the format in the description column and filter out the possible root spans of the traces of the query. Then, by selecting the traces which have their parent ids equal to the span id of the root span, we can get all the traces relevant to the query. We can find the total duration for a particular statement. Interesting statements/traces can be viewed as a timeline. Is this method alright? Thanks, Nishani On Wed, Jun 24, 2015 at 11:21 PM, James Taylor jamestay...@apache.org wrote: Yep, Jesse's right - the query is in the description column of the root span of the trace. We'll need to include this in the trace UI, otherwise the user won't have the context they need to know what they're looking at. 
If there's something missing from the way we're capturing, we can fix it. Thanks, James On Wed, Jun 24, 2015 at 9:09 AM, Jesse Yates jesse.k.ya...@gmail.com wrote: There was some discussion (maybe internal to salesforce?) around how to include the query in the trace. I think the simplest we came up with was just adding it to the trace metadata (as an annotation?) and then you can pull it out later since you know the key it was stored as On Wed, Jun 24, 2015 at 9:05 AM Ayola Jayamaha raphaelan...@gmail.com wrote: Hi James, I find it difficult to come up with a method to include the SQL statements with the traces. But it is possible to filter out the traces for a particular table for a given time period. Aggregating over the time spent for each SQL statement is possible. With the relationship between parent and span ids it is possible to differentiate between traces belonging to each query. Thanks, Nishani On Wed, Jun 24, 2015 at 12:11 PM, James Taylor jamestay...@apache.org wrote: Hi Nishani, I think this is a good start. One important part is tying this back to something to which the user can relate - namely the SQL statement that was executed. Would it be possible to include the string of the statement? Another interesting angle would be to group by the statement and aggregate the overall time spent to get an idea of the top N queries over a given time range. Then drilling into those to see the traces. Thanks, James On Fri, Jun 19, 2015 at 11:01 AM, Ayola Jayamaha raphaelan...@gmail.com wrote: Hi All, Milestone-1 can be found in my git repo[1]. Features : - Adding tracing to a timeline using sample json - Comparing two or more traces on the timeline - Visualizing the trace distribution across the time axis - Removing a trace from the list of traces represented on the chart - Listing the tracing information on a table Any feedback will be appreciated. Thanks. 
[1] https://github.com/AyolaJayamaha/TracingWebApp/tree/milestone-1 On Wed, Jun 17, 2015 at 11:35 PM, Ayola Jayamaha raphaelan...@gmail.com wrote: Hi All, You can find milestone-1 in my git repo. This is the working branch[1]. It has not been bound to the backend yet. But the visualization of traces can be seen from the code. Traces can be selected from table/time period and shown on the timeline as [2]. The parameters could be entered as TableName, StartTime, EndTime and the traces would be listed down. The user can select the traces as he prefers and view their timelines. Is the procedure ok? The start time of different traces could be visualized by bringing them up to a same
[jira] [Updated] (PHOENIX-2036) PhoenixConfigurationUtil should provide a pre-normalize table name to PhoenixRuntime
[ https://issues.apache.org/jira/browse/PHOENIX-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] maghamravikiran updated PHOENIX-2036: - Attachment: PHOENIX-2036-v2.patch PhoenixConfigurationUtil should provide a pre-normalize table name to PhoenixRuntime Key: PHOENIX-2036 URL: https://issues.apache.org/jira/browse/PHOENIX-2036 Project: Phoenix Issue Type: Bug Reporter: Siddhi Mehta Priority: Minor Attachments: PHOENIX-2036-v1.patch, PHOENIX-2036-v2.patch, PHOENIX-2036.patch Original Estimate: 24h Remaining Estimate: 24h I was trying a basic store using PhoenixHBaseStorage and ran into some issues with it complaining about TableNotFoundException. The table (CUSTOM_ENTITY.z02) in question exists. Looking at the stacktrace, I think it's likely related to the change in PHOENIX-1682, where the Phoenix runtime expects a pre-normalized table name. We need to update PhoenixConfigurationUtil.getSelectColumnMetadataList(Configuration) to pass a pre-normalized table name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PHOENIX-2036) PhoenixConfigurationUtil should provide a pre-normalize table name to PhoenixRuntime
[ https://issues.apache.org/jira/browse/PHOENIX-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607614#comment-14607614 ] maghamravikiran commented on PHOENIX-2036: -- [~jamestaylor], [~siddhimehta] I have attached the latest patch file. Please have a look. [~tdsilva] Can you please work on your changes after merging my changes? PhoenixConfigurationUtil should provide a pre-normalize table name to PhoenixRuntime Key: PHOENIX-2036 URL: https://issues.apache.org/jira/browse/PHOENIX-2036 Project: Phoenix Issue Type: Bug Reporter: Siddhi Mehta Priority: Minor Attachments: PHOENIX-2036-v1.patch, PHOENIX-2036-v2.patch, PHOENIX-2036.patch Original Estimate: 24h Remaining Estimate: 24h I was trying a basic store using PhoenixHBaseStorage and ran into some issues with it complaining about TableNotFoundException. The table (CUSTOM_ENTITY.z02) in question exists. Looking at the stacktrace, I think it's likely related to the change in PHOENIX-1682, where the Phoenix runtime expects a pre-normalized table name. We need to update PhoenixConfigurationUtil.getSelectColumnMetadataList(Configuration) to pass a pre-normalized table name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
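[Editor's note] The "pre-normalized table name" point above follows standard SQL identifier rules: unquoted identifiers are upper-cased, quoted ones keep their case. A simplified stand-in (Phoenix's real rules live in its parser/SchemaUtil; this only illustrates why an un-normalized name like CUSTOM_ENTITY.z02 can miss the stored table):

```java
public class IdentifierNormalization {
    // Simplified model of SQL identifier normalization: unquoted identifiers
    // are upper-cased, while double-quoted identifiers keep their exact case
    // with the quotes stripped.
    public static String normalize(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        return identifier.toUpperCase();
    }
}
```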
[jira] [Commented] (PHOENIX-2060) Implement ARRAY_FILL built in function
[ https://issues.apache.org/jira/browse/PHOENIX-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608070#comment-14608070 ] ramkrishna.s.vasudevan commented on PHOENIX-2060: - [~Dumindux] Patch looks good to me. Nice tests. Is there a specific test covering this? {code} // When the max length of a char array is not the max length of the element passed in if (getElementExpr().getDataType().isFixedWidth() && getMaxLength() != null && getMaxLength() != array.getMaxLength()) { array = new PhoenixArray(array, getMaxLength()); } {code} Implement ARRAY_FILL built in function -- Key: PHOENIX-2060 URL: https://issues.apache.org/jira/browse/PHOENIX-2060 Project: Phoenix Issue Type: Sub-task Reporter: Dumindu Buddhika Assignee: Dumindu Buddhika Attachments: PHOENIX-2060-v1.patch ARRAY_FILL(element, length) - Returns an array initialized with supplied value and length. Eg: ARRAY_FILL(4, 5) - ARRAY[4, 4, 4, 4, 4] ARRAY_FILL(a, 3) - ARRAY[a, a, a] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (PHOENIX-1954) Reserve chunks of numbers for a sequence
[ https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor updated PHOENIX-1954: -- Attachment: PHOENIX-1954-wip.patch [~jfernando_sfdc] - here's a patch that compiles. I think it mostly threads everything in where needed. For the case you brought up about referencing the same sequence with different allocations, I think it's going to be best to treat these as the same, but keep the max allocation we see (this is what this patch does). Otherwise, putting it into the SequenceKey will be problematic as we don't have this information when the sequence is created and dropped (which is one way that clears out our client side cache). Also, we should allocate numToAllocate * incrementByAmount on the server side. Your sequences will likely be incrementing by 1, but it'd be allowed for it to be more than 1 too. Reserve chunks of numbers for a sequence Key: PHOENIX-1954 URL: https://issues.apache.org/jira/browse/PHOENIX-1954 Project: Phoenix Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Jan Fernando Attachments: PHOENIX-1954-wip.patch In order to be able to generate many ids in bulk (for example in map reduce jobs) we need a way to generate or reserve large sets of ids. We also need to mix ids reserved with incrementally generated ids from other clients. For this we need to atomically increment the sequence and return the value it had when the increment happened. If we're OK to throw the current cached set of values away we can do {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the value it incremented from (i.e. it has to throw the current cache away, and return the next value it found at the server). Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the same, but does not invalidate the cache. 
Note that in either case we won't retrieve the reserved set of values via {{NEXT VALUE FOR}} because we'd need to be idempotent. In our case, all we need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}} which returns a value M, the range [M, M+N) won't be used by any other user of the sequence. We might need to reserve 1bn ids this way ahead of a map reduce run. Any better ideas? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
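[Editor's note] The server-side allocation arithmetic mentioned in the comment above (numToAllocate * incrementByAmount) can be made concrete; the class and method names here are illustrative, not Phoenix's:

```java
public class SequenceAllocationMath {
    // Server-side bump for a bulk allocation: the stored sequence value must
    // advance by numToAllocate * incrementBy rather than numToAllocate alone,
    // so sequences whose INCREMENT BY is greater than 1 stay correct.
    public static long valueAfterAllocation(long currentValue,
                                            long numToAllocate, long incrementBy) {
        return currentValue + numToAllocate * incrementBy;
    }
}
```

For example, reserving 1000 slots of a sequence that increments by 5 advances the stored value by 5000, not 1000.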
[jira] [Reopened] (PHOENIX-1913) Unable to build the website code in svn
[ https://issues.apache.org/jira/browse/PHOENIX-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Mahonin reopened PHOENIX-1913: --- I just ran into the same issue: it doesn't work with Java 8, despite reporting a build success. The errors displayed are the same as in this report. James asked me to re-open and assign to Mujtaba. Unable to build the website code in svn --- Key: PHOENIX-1913 URL: https://issues.apache.org/jira/browse/PHOENIX-1913 Project: Phoenix Issue Type: Bug Reporter: maghamravikiran Assignee: Mujtaba Chohan Following the steps mentioned in http://phoenix.apache.org/building_website.html I get the below exception Generate Phoenix Website Pre-req: On source repo run $ mvn install -DskipTests BUILDING LANGUAGE REFERENCE === src/tools/org/h2/build/BuildBase.java:136: error: no suitable method found for replaceAll(String,String,String) pattern = replaceAll(pattern, "/", File.separator); ^ method List.replaceAll(UnaryOperator<File>) is not applicable (actual and formal argument lists differ in length) method ArrayList.replaceAll(UnaryOperator<File>) is not applicable (actual and formal argument lists differ in length) 1 error Error: Could not find or load main class org.h2.build.Build BUILDING SITE === [INFO] Scanning for projects... [ERROR] The build could not read 1 project - [Help 1] [ERROR] [ERROR] The project org.apache.phoenix:phoenix-site:[unknown-version] (/Users/ravimagham/git/sources/phoenix/site/source/pom.xml) has 1 error [ERROR] Non-resolvable parent POM: Could not find artifact org.apache.phoenix:phoenix:pom:4.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 11 - [Help 2] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException Can you please have a look ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (PHOENIX-1954) Reserve chunks of numbers for a sequence
[ https://issues.apache.org/jira/browse/PHOENIX-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609077#comment-14609077 ] James Taylor edited comment on PHOENIX-1954 at 6/30/15 11:11 PM: - [~jfernando_sfdc] - here's a patch that compiles. I think it mostly threads everything in where needed. For the case you brought up about referencing the same sequence with different allocations, I think it's going to be best to treat these as the same, but keep the max allocation we see (this is what this patch does). Otherwise, putting it into the SequenceKey will be problematic as we don't have this information when the sequence is created and dropped (which is one way that clears out our client side cache). For example, the following query would return the same value for the NEXT VALUE expressions (the next value for the sequence with a batch of 1000 consecutive sequences allocated): {code} SELECT NEXT VALUE FOR seq, NEXT 1000 VALUES FOR seq FROM T LIMIT 1; {code} as does this query today if both were NEXT VALUE FOR calls. Also, we should allocate numToAllocate * incrementByAmount on the server side. Your sequences will likely be incrementing by 1, but it'd be allowed for it to be more than 1 too. was (Author: jamestaylor): [~jfernando_sfdc] - here's a patch that compiles. I think it mostly threads everything in where needed. For the case you brought up about referencing the same sequence with different allocations, I think it's going to be best to treat these as the same, but keep the max allocation we see (this is what this patch does). Otherwise, putting it into the SequenceKey will be problematic as we don't have this information when the sequence is created and dropped (which is one way that clears out our client side cache). Also, we should allocate numToAllocate * incrementByAmount on the server side. Your sequences will likely be incrementing by 1, but it'd be allowed for it to be more than 1 too. 
Reserve chunks of numbers for a sequence Key: PHOENIX-1954 URL: https://issues.apache.org/jira/browse/PHOENIX-1954 Project: Phoenix Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Jan Fernando Attachments: PHOENIX-1954-wip.patch In order to be able to generate many ids in bulk (for example in map reduce jobs) we need a way to generate or reserve large sets of ids. We also need to mix ids reserved with incrementally generated ids from other clients. For this we need to atomically increment the sequence and return the value it had when the increment happened. If we're OK to throw the current cached set of values away we can do {{NEXT VALUE FOR seq(,N)}}, that needs to increment value and return the value it incremented from (i.e. it has to throw the current cache away, and return the next value it found at the server). Or we can invent a new syntax {{RESERVE VALUES FOR seq, N}} that does the same, but does not invalidate the cache. Note that in either case we won't retrieve the reserved set of values via {{NEXT VALUE FOR}} because we'd need to be idempotent. In our case, all we need to guarantee is that after a call to {{RESERVE VALUES FOR seq, N}} which returns a value M, the range [M, M+N) won't be used by any other user of the sequence. We might need to reserve 1bn ids this way ahead of a map reduce run. Any better ideas? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (PHOENIX-1913) Unable to build the website code in svn with Java 8
[ https://issues.apache.org/jira/browse/PHOENIX-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Taylor updated PHOENIX-1913: -- Summary: Unable to build the website code in svn with Java 8 (was: Unable to build the website code in svn) Unable to build the website code in svn with Java 8 --- Key: PHOENIX-1913 URL: https://issues.apache.org/jira/browse/PHOENIX-1913 Project: Phoenix Issue Type: Bug Reporter: maghamravikiran Assignee: Mujtaba Chohan Following the steps mentioned in http://phoenix.apache.org/building_website.html I get the below exception Generate Phoenix Website Pre-req: On source repo run $ mvn install -DskipTests BUILDING LANGUAGE REFERENCE === src/tools/org/h2/build/BuildBase.java:136: error: no suitable method found for replaceAll(String,String,String) pattern = replaceAll(pattern, "/", File.separator); ^ method List.replaceAll(UnaryOperator<File>) is not applicable (actual and formal argument lists differ in length) method ArrayList.replaceAll(UnaryOperator<File>) is not applicable (actual and formal argument lists differ in length) 1 error Error: Could not find or load main class org.h2.build.Build BUILDING SITE === [INFO] Scanning for projects... [ERROR] The build could not read 1 project - [Help 1] [ERROR] [ERROR] The project org.apache.phoenix:phoenix-site:[unknown-version] (/Users/ravimagham/git/sources/phoenix/site/source/pom.xml) has 1 error [ERROR] Non-resolvable parent POM: Could not find artifact org.apache.phoenix:phoenix:pom:4.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 11 - [Help 2] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException Can you please have a look ? -- This message was sent by Atlassian JIRA (v6.3.4#6332)