[jira] [Commented] (PHOENIX-4278) Implement pure client side transactional index maintenance

2018-03-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399918#comment-16399918
 ] 

James Taylor commented on PHOENIX-4278:
---

In looking at PHOENIX-4641, I noticed that this JIRA had made some changes to 
the way we compute the index updates. Instead of running a single query to 
compute the prior row values across all indexes, it would run a query per 
index, which would be much more expensive. I've reworked the refactoring and 
attached an addendum patch, [~ohads]. Will run all tests against it locally.
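The difference can be sketched in a few lines of illustrative Java (the class and method names below are invented for illustration, not Phoenix APIs): rather than issuing one prior-row query per index, union the columns that every index maintains and issue a single lookup covering all of them.

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PriorRowLookupSketch {
    // Illustrative sketch only -- names are invented, not Phoenix APIs.
    // Union the columns every index maintains so the prior row can be
    // fetched with a single combined lookup instead of one query per index.
    static Set<String> columnsForSingleLookup(List<Set<String>> indexedColumnsPerIndex) {
        Set<String> union = new TreeSet<>();
        for (Set<String> cols : indexedColumnsPerIndex) {
            union.addAll(cols); // one combined lookup instead of one RPC per index
        }
        return union;
    }

    public static void main(String[] args) {
        List<Set<String>> indexes = List.of(Set.of("A", "B"), Set.of("B", "C"));
        System.out.println(columnsForSingleLookup(indexes)); // [A, B, C]
    }
}
```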

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4278.4.x-HBase-1.3.v1.patch, 
> PHOENIX-4278_5.x-HBase-2.0.patch, PHOENIX-4278_addendum1.patch, 
> PHOENIX-4278_v2.patch
>
>
> The index maintenance for transactions follows the same model as 
> non-transactional tables: a coprocessor on data table updates looks up the 
> previous row value to perform maintenance. This is necessary for 
> non-transactional tables to ensure the rows are locked so that a consistent 
> view may be obtained. However, for transactional tables, the timestamp 
> oracle ensures uniqueness of timestamps (via transaction IDs) and the 
> filtering ensures a scan sees the "true" last committed value for a row. 
> Thus, there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.
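A minimal sketch of what client-side maintenance means in practice (illustrative names only, not actual Phoenix classes; it assumes the server returns the prior row value to the client, as the description proposes):

```java
import java.util.ArrayList;
import java.util.List;

public class ClientIndexMaintenanceSketch {
    // Hedged sketch of client-side index maintenance; names are illustrative,
    // not actual Phoenix classes.
    static String indexRowKey(String indexedValue, String dataRowKey) {
        // Typical secondary-index layout: indexed value, then data row key.
        return indexedValue + "\u0000" + dataRowKey;
    }

    // Compute the index mutations for one data row update: delete the stale
    // index row (built from the prior value), upsert the new index row.
    static List<String> indexMutations(String dataRowKey, String priorValue, String newValue) {
        List<String> mutations = new ArrayList<>();
        if (priorValue != null && !priorValue.equals(newValue)) {
            mutations.add("DELETE " + indexRowKey(priorValue, dataRowKey));
        }
        if (newValue != null && !newValue.equals(priorValue)) {
            mutations.add("PUT " + indexRowKey(newValue, dataRowKey));
        }
        return mutations;
    }

    public static void main(String[] args) {
        // The client, not a server-side coprocessor, turns the prior/new
        // values into index table mutations.
        System.out.println(indexMutations("row1", "old", "new"));
    }
}
```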



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4278) Implement pure client side transactional index maintenance

2018-03-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4278:
--
Attachment: PHOENIX-4278_addendum1.patch

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4278.4.x-HBase-1.3.v1.patch, 
> PHOENIX-4278_5.x-HBase-2.0.patch, PHOENIX-4278_addendum1.patch, 
> PHOENIX-4278_v2.patch
>
>
> The index maintenance for transactions follows the same model as 
> non-transactional tables: a coprocessor on data table updates looks up the 
> previous row value to perform maintenance. This is necessary for 
> non-transactional tables to ensure the rows are locked so that a consistent 
> view may be obtained. However, for transactional tables, the timestamp 
> oracle ensures uniqueness of timestamps (via transaction IDs) and the 
> filtering ensures a scan sees the "true" last committed value for a row. 
> Thus, there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.





[jira] [Reopened] (PHOENIX-4278) Implement pure client side transactional index maintenance

2018-03-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4278:
---

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4278.4.x-HBase-1.3.v1.patch, 
> PHOENIX-4278_5.x-HBase-2.0.patch, PHOENIX-4278_addendum1.patch, 
> PHOENIX-4278_v2.patch
>
>
> The index maintenance for transactions follows the same model as 
> non-transactional tables: a coprocessor on data table updates looks up the 
> previous row value to perform maintenance. This is necessary for 
> non-transactional tables to ensure the rows are locked so that a consistent 
> view may be obtained. However, for transactional tables, the timestamp 
> oracle ensures uniqueness of timestamps (via transaction IDs) and the 
> filtering ensures a scan sees the "true" last committed value for a row. 
> Thus, there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.





[jira] [Commented] (PHOENIX-4611) Not nullable column impact on join query plans

2018-03-14 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399915#comment-16399915
 ] 

Maryann Xue commented on PHOENIX-4611:
--

Thank you very much for reminding me, [~jamestaylor]! Fixed now. I had missed a 
commit for PHOENIX-4322.

> Not nullable column impact on join query plans
> --
>
> Key: PHOENIX-4611
> URL: https://issues.apache.org/jira/browse/PHOENIX-4611
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> With PHOENIX-2566, there's a subtle change in projected tables in that a 
> column may end up being not nullable whereas before it was nullable when the 
> family name is not null. I've kept the old behavior with 
> [this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
>  commit, but would you mind confirming what the right thing to do is, 
> [~maryannxue]?
> Without this change, the explain plan changes in 
> SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the 
> compiler ends up changing the row ordering.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399883#comment-16399883
 ] 

Hudson commented on PHOENIX-4231:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #63 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/63/])
PHOENIX-4231 Support restriction of remote UDF load sources (apurtell: rev 
ae3618ff88c36eb04734fad78ac64c8989fc470f)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for a similar restriction via configuration for jars loaded 
> via hdfs:// URIs.
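A minimal sketch of such a restriction (hypothetical names, not the actual patch): resolve the jar URI, normalize it so `../` segments cannot escape, and require it to fall under the configured jar directory.

```java
import java.net.URI;

public class UdfJarSourceCheck {
    // Hypothetical sketch of the restriction described above (not the actual
    // patch): only accept a jar whose normalized URI falls under the
    // configured jar directory.
    static boolean isAllowed(String jarUri, String allowedDir) {
        URI jar = URI.create(jarUri).normalize();
        String dir = allowedDir.endsWith("/") ? allowedDir : allowedDir + "/";
        return jar.toString().startsWith(URI.create(dir).normalize().toString());
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("hdfs://nn/phoenix/jars/udf.jar", "hdfs://nn/phoenix/jars"));
    }
}
```

Normalizing before comparing is the important design point: without it, `hdfs://nn/phoenix/jars/../elsewhere/udf.jar` would pass a plain prefix check.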





[jira] [Commented] (PHOENIX-4634) Looking up a parent index table of a tenant child view fails in BaseColumnResolver createTableRef()

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399843#comment-16399843
 ] 

Hudson commented on PHOENIX-4634:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1805 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1805/])
PHOENIX-4634 Looking up a parent index table of a tenant child view (tdsilva: 
rev 4e677818e2a2453e3e078506e3e096301df4564f)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixDriverIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ChildViewsUseParentViewIndexIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Looking up a parent index table of a tenant child view fails in 
> BaseColumnResolver createTableRef()
> ---
>
> Key: PHOENIX-4634
> URL: https://issues.apache.org/jira/browse/PHOENIX-4634
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4634-4.x-HBase-0.98.patch, 
> PHOENIX-4634-v2.patch, PHOENIX-4634-v3.patch, PHOENIX-4634-v4.patch, 
> PHOENIX-4634-v5.patch, PHOENIX-4634-v6.patch, PHOENIX-4634-v7.patch
>
>
> If we are looking up a parent table index of a child view, we need to 
> resolve the view, which will load the parent table's indexes (instead of 
> trying to resolve the parent table index directly). 
>  
> {code:java}
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=Schema.Schema.Index#Schma.View
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:391)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:103)
> at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:501)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:770)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:758)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:386)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
> {code}





[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399844#comment-16399844
 ] 

Hudson commented on PHOENIX-4370:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1805 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1805/])
PHOENIX-4370 Surface hbase metrics from perconnection to global metrics (ewang: 
rev 274c7be949f8502c087caaa1605afdccd410ac90)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/GlobalClientMetrics.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java


> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Major
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently, on the Phoenix client side, HBase metrics are recorded and 
> surfaced at the per-connection level. PHOENIX-4370 allows them to be 
> aggregated at the global level, i.e., across all connections within one JVM, 
> so that users can evaluate them as stable metrics periodically.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
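The aggregation step can be sketched as follows (illustrative only; this is not the actual GlobalClientMetrics API): each connection tracks its own counters, and on close they are folded into JVM-wide counters shared across connections.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class GlobalMetricsSketch {
    // Illustrative only -- not the actual GlobalClientMetrics API.
    // JVM-wide counters, keyed by metric name, shared by all connections.
    static final Map<String, AtomicLong> GLOBAL = new ConcurrentHashMap<>();

    // Fold one connection's metrics into the global counters.
    static void mergeConnectionMetrics(Map<String, Long> perConnectionMetrics) {
        perConnectionMetrics.forEach((name, value) ->
                GLOBAL.computeIfAbsent(name, k -> new AtomicLong()).addAndGet(value));
    }

    public static void main(String[] args) {
        // Two connections contribute to the same global counter.
        mergeConnectionMetrics(Map.of("COUNT_RPC_CALLS", 10L));
        mergeConnectionMetrics(Map.of("COUNT_RPC_CALLS", 5L));
        System.out.println(GLOBAL.get("COUNT_RPC_CALLS").get()); // 15
    }
}
```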





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399845#comment-16399845
 ] 

Hudson commented on PHOENIX-4231:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1805 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1805/])
PHOENIX-4231 Support restriction of remote UDF load sources (apurtell: rev 
74228aee724e24ddb00bef2be0c7430172b699a8)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for a similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-14 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399833#comment-16399833
 ] 

Thomas D'Silva commented on PHOENIX-4579:
-

[~ckulkarni]

{{ConnectionQueryServicesImpl.ensureTableCreated}} creates/updates the HBase 
metadata when createTable is called for SYSTEM.CATALOG. 

> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on the first client 
> connection. Add a property to make this configurable (with a default of 
> true, the current behavior).
> With this property set to false, we avoid the lockstep upgrade requirement 
> for all clients when changing meta properties via PHOENIX-4575, as the 
> property can be flipped back on once all the clients are upgraded.





[jira] [Updated] (PHOENIX-4654) Document PhoenixDatabaseMetaData.getTable arguments

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4654:

Attachment: PHOENIX-4654.patch

> Document PhoenixDatabaseMetaData.getTable arguments
> ---
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-4654.patch
>
>






[jira] [Updated] (PHOENIX-4654) Document PhoenixDatabaseMetaData.getTable arguments

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4654:

Description: (was: or else we end up doing a full table scan)
Summary: Document PhoenixDatabaseMetaData.getTable arguments  (was: 
Document Phoenix)

> Document PhoenixDatabaseMetaData.getTable arguments
> ---
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>






[jira] [Updated] (PHOENIX-4654) Document Phoenix

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4654:

Summary: Document Phoenix  (was: In 
PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when the 
connection used does not have a tenant id )

> Document Phoenix
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Updated] (PHOENIX-4654) Document Phoenix

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4654:

Issue Type: Task  (was: Bug)

> Document Phoenix
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Commented] (PHOENIX-4654) In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when the connection used does not have a tenant id

2018-03-14 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399795#comment-16399795
 ] 

Thomas D'Silva commented on PHOENIX-4654:
-

Ok, that makes sense. I'll change this JIRA to document this on the getTable() 
method.

> In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when 
> the connection used does not have a tenant id 
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Commented] (PHOENIX-4654) In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when the connection used does not have a tenant id

2018-03-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399787#comment-16399787
 ] 

James Taylor commented on PHOENIX-4654:
---

There's a subtle distinction in using a null tenantId and an empty string 
tenantId. The latter will do as you mention (add an IS NULL clause), but the 
former is a way to get all tenants - global and multi-tenant.
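That distinction can be summarized in a small sketch (illustrative only, not the actual PhoenixDatabaseMetaData code): a null pattern means no tenant filter at all, an empty pattern means only global (tenant-less) rows.

```java
public class TenantIdFilterSketch {
    // Simplified sketch of the semantics described above (illustrative, not
    // the actual PhoenixDatabaseMetaData code).
    static String tenantFilter(String tenantIdPattern) {
        if (tenantIdPattern == null) {
            return "";                       // no filter: global and multi-tenant rows
        } else if (tenantIdPattern.isEmpty()) {
            return "TENANT_ID IS NULL";      // global rows only
        } else {
            return "TENANT_ID LIKE '" + tenantIdPattern + "'";
        }
    }

    public static void main(String[] args) {
        System.out.println(tenantFilter(""));  // TENANT_ID IS NULL
    }
}
```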

> In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when 
> the connection used does not have a tenant id 
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Commented] (PHOENIX-4649) Phoenix Upsert..Select query is not working for long running query.But while we ran same query with limit clause then it works fine

2018-03-14 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399778#comment-16399778
 ] 

Jepson commented on PHOENIX-4649:
-

*hdfs-site.xml:*
{code:xml}
<property><name>dfs.client.socket-timeout</name><value>180</value></property>
<property><name>dfs.socket.timeout</name><value>180</value></property>
<property><name>dfs.datanode.socket.write.timeout</name><value>180</value></property>
{code}

Try it.



 

> Phoenix Upsert..Select query is not working for long running query.But while 
> we ran same query with limit clause then it works fine  
> -
>
> Key: PHOENIX-4649
> URL: https://issues.apache.org/jira/browse/PHOENIX-4649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Nitin D Chunke
>Priority: Major
>  Labels: patch
> Fix For: 5.0.0-alpha
>
> Attachments: image-2018-03-11-00-16-14-968.png, 
> image-2018-03-11-00-22-32-613.png
>
>
> We have data in table A, around 3 million records, and from this table we 
> upsert data into table B without a limit clause.
> (Note: we have already exported the HBase conf path, set all Phoenix 
> properties to the mentioned values, and both tables are salted.)
> Please see the following hbase-site.xml screenshot.
> !image-2018-03-11-00-22-32-613.png!
> But the HBase conf was not picked up when we launched sqlline/psql.
> Please refer to the following screenshot:
> !image-2018-03-11-00-16-14-968.png!
> And when we ran the same query with a limit clause set higher than the total 
> number of records in table A, it ran fine without any error.





[jira] [Commented] (PHOENIX-4611) Not nullable column impact on join query plans

2018-03-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399768#comment-16399768
 ] 

James Taylor commented on PHOENIX-4611:
---

Thanks, [~maryannxue]. Looks like there's still one regression in the 
4.x-HBase-1.1 branch: 
https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/685/testReport/

> Not nullable column impact on join query plans
> --
>
> Key: PHOENIX-4611
> URL: https://issues.apache.org/jira/browse/PHOENIX-4611
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> With PHOENIX-2566, there's a subtle change in projected tables in that a 
> column may end up being not nullable whereas before it was nullable when the 
> family name is not null. I've kept the old behavior with 
> [this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
>  commit, but would you mind confirming what the right thing to do is, 
> [~maryannxue]?
> Without this change, the explain plan changes in 
> SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the 
> compiler ends up changing the row ordering.





[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399766#comment-16399766
 ] 

James Taylor commented on PHOENIX-4579:
---

Because we always attempt to create the system catalog table, the code that 
creates/updates the hbase metadata always runs. So even if the table already 
exists (determined on the server by the createTable call), the metadata would 
have already been updated.

I don't think we need a new {{phoenix.system.upgrade.first.connection}} 
property. Instead, we can base what we do off of 
{{phoenix.autoupgrade.enabled}} and the version of the system catalog (i.e. the 
new information we'll return in the response back from the getVersion() call). 
Like you said, if {{phoenix.autoupgrade.enabled}} is false and the version we 
get back is older than the current client version, we can throw an error 
(unless we're in the process of upgrading). At that point, the only action a 
client could take would be to run EXECUTE UPGRADE.
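The proposed check can be sketched as follows (hedged sketch; the names and version encoding are invented, not the actual patch): if auto-upgrade is off and the server's SYSTEM.CATALOG is older than the client, refuse the connection instead of upgrading implicitly.

```java
public class UpgradeGateSketch {
    // Hedged sketch of the check described above; names and the integer
    // version encoding are invented for illustration.
    static String onConnect(boolean autoUpgradeEnabled, int serverCatalogVersion, int clientVersion) {
        if (serverCatalogVersion >= clientVersion) {
            return "CONNECT";   // server metadata is current; nothing to do
        }
        return autoUpgradeEnabled
                ? "UPGRADE"     // client may upgrade SYSTEM.CATALOG itself
                : "ERROR: client newer than SYSTEM.CATALOG; run EXECUTE UPGRADE";
    }

    public static void main(String[] args) {
        System.out.println(onConnect(false, 413, 414));
    }
}
```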

> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on the first client 
> connection. Add a property to make this configurable (with a default of 
> true, the current behavior).
> With this property set to false, we avoid the lockstep upgrade requirement 
> for all clients when changing meta properties via PHOENIX-4575, as the 
> property can be flipped back on once all the clients are upgraded.





[jira] [Commented] (PHOENIX-4654) In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when the connection used does not have a tenant id

2018-03-14 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399759#comment-16399759
 ] 

Thomas D'Silva commented on PHOENIX-4654:
-

[~jamestaylor] 
In PhoenixDatabaseMetaData.addTenantIdFilter if the tenantIdPattern is null and 
the connection being used is a global connection, should we add "TENANT_ID IS 
NULL" to the where clause? 

{code}
private void addTenantIdFilter(StringBuilder buf, String tenantIdPattern) {
    PName tenantId = connection.getTenantId();
    if (tenantIdPattern == null) {
        if (tenantId != null) {
            appendConjunction(buf);
            buf.append(" (" + TENANT_ID + " IS NULL " +
                    " OR " + TENANT_ID + " = '" +
                    StringUtil.escapeStringConstant(tenantId.getString()) + "') ");
        }
    } else if (tenantIdPattern.length() == 0) {
        appendConjunction(buf);
        buf.append(TENANT_ID + " IS NULL ");
    } else {
        appendConjunction(buf);
        buf.append(" TENANT_ID LIKE '" +
                StringUtil.escapeStringConstant(tenantIdPattern) + "' ");
        if (tenantId != null) {
            buf.append(" and TENANT_ID = '" +
                    StringUtil.escapeStringConstant(tenantId.getString()) + "' ");
        }
    }
}
{code}

> In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when 
> the connection used does not have a tenant id 
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Updated] (PHOENIX-4654) In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when the connection used does not have a tenant id

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4654:

Description: or else we end up doing a full table scan  (was: We are 
currently doing a full table scan since we do not include a tenant id filter in 
PhoenixDatabaseMetaData.getTables())
Summary: In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE 
TENANT_ID is NULL when the connection used does not have a tenant id   (was: In 
PhoenixDatabaseMetaData.getTables add the connection's tenant id to where 
clause filter)

> In PhoenixDatabaseMetaData.addTenantIdFilter add WHERE TENANT_ID is NULL when 
> the connection used does not have a tenant id 
> 
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> or else we end up doing a full table scan





[jira] [Assigned] (PHOENIX-4654) In PhoenixDatabaseMetaData.getTables add the connection's tenant id to where clause filter

2018-03-14 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4654:
---

Assignee: Thomas D'Silva

> In PhoenixDatabaseMetaData.getTables add the connection's tenant id to where 
> clause filter
> --
>
> Key: PHOENIX-4654
> URL: https://issues.apache.org/jira/browse/PHOENIX-4654
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> We are currently doing a full table scan since we do not include a tenant id 
> filter in PhoenixDatabaseMetaData.getTables()





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399635#comment-16399635
 ] 

Hudson commented on PHOENIX-4231:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1835 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1835/])
PHOENIX-4231 Support restriction of remote UDF load sources (apurtell: rev 
ade93c9d5ac6ecad2234d22da6fdbb1168c5d32a)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for a similar restriction via configuration for jars loaded 
> via hdfs:// URIs.
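A minimal sketch of the kind of check this implies, assuming a single configured allowed directory (the method and class names here are illustrative; the real restriction would be driven by a configuration key such as hbase.dynamic.jars.dir):

```java
import java.net.URI;

// Illustrative sketch: reject a UDF jar URI unless it lives under the
// configured allowed directory. Not the actual UDFExpression logic.
public class UdfJarSourceCheckSketch {
    public static boolean isAllowed(String jarUri, String allowedDir) {
        // Normalize both sides so "..//" tricks cannot escape the allowed dir
        URI jar = URI.create(jarUri).normalize();
        String dir = allowedDir.endsWith("/") ? allowedDir : allowedDir + "/";
        return jar.toString().startsWith(URI.create(dir).normalize().toString());
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("hdfs://nn/udf/jars/my.jar", "hdfs://nn/udf/jars"));
    }
}
```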



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399627#comment-16399627
 ] 

Hudson commented on PHOENIX-4370:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #62 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/62/])
PHOENIX-4370 Surface hbase metrics from perconnection to global metrics (ewang: 
rev c115b6a3ec77fc5cbfbf6322607274bfd07fc518)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/GlobalClientMetrics.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java


> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Major
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently on the Phoenix client side, HBase metrics are recorded and surfaced 
> at the per-connection level. PHOENIX-4370 allows them to be aggregated at the 
> global level, i.e., across all connections within one JVM, so that users can 
> evaluate them as stable metrics periodically.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
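The aggregation idea can be sketched like this. The class and method names are illustrative, not Phoenix's actual GlobalClientMetrics API; the point is that per-connection deltas are merged into JVM-wide counters:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: one JVM-wide counter per metric name; each
// connection merges its deltas in as scans complete.
public class GlobalMetricsSketch {
    private static final Map<String, AtomicLong> GLOBAL = new ConcurrentHashMap<>();

    public static void mergeConnectionMetric(String name, long delta) {
        GLOBAL.computeIfAbsent(name, k -> new AtomicLong()).addAndGet(delta);
    }

    public static long globalValue(String name) {
        AtomicLong v = GLOBAL.get(name);
        return v == null ? 0L : v.get();
    }

    public static void main(String[] args) {
        // Two connections report RPC call counts; the global view sums them.
        mergeConnectionMetric("COUNT_RPC_CALLS", 5);
        mergeConnectionMetric("COUNT_RPC_CALLS", 3);
        System.out.println(globalValue("COUNT_RPC_CALLS"));
    }
}
```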



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4654) In PhoenixDatabaseMetaData.getTables add the connection's tenant id to where clause filter

2018-03-14 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4654:
---

 Summary: In PhoenixDatabaseMetaData.getTables add the connection's 
tenant id to where clause filter
 Key: PHOENIX-4654
 URL: https://issues.apache.org/jira/browse/PHOENIX-4654
 Project: Phoenix
  Issue Type: Bug
Reporter: Thomas D'Silva


We are currently doing a full table scan since we do not include a tenant id 
filter in PhoenixDatabaseMetaData.getTables()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4616) Move join query optimization out from QueryCompiler into QueryOptimizer

2018-03-14 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399505#comment-16399505
 ] 

James Taylor commented on PHOENIX-4616:
---

Instead of this:
{code}
+if (dataPlan instanceof BaseQueryPlan) {
+    return getApplicablePlans((BaseQueryPlan) dataPlan, statement, targetColumns, parallelIteratorFactory, stopAtBestPlan);
+}
{code}
can you do this?
{code}
if (dataPlan.getSourceRefs().size() == 1) {
    return getApplicablePlans(dataPlan, statement, targetColumns, parallelIteratorFactory, stopAtBestPlan);
}
{code}

> Move join query optimization out from QueryCompiler into QueryOptimizer
> ---
>
> Key: PHOENIX-4616
> URL: https://issues.apache.org/jira/browse/PHOENIX-4616
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Major
> Attachments: PHOENIX-4616.patch
>
>
> Currently we do optimization for join queries inside QueryCompiler, which 
> makes the APIs and code logic confusing, so we need to move join optimization 
> logic into QueryOptimizer.
>  Similarly, but probably with a different approach, we need to optimize UNION 
> ALL queries and derived table sub-queries in QueryOptimizer.optimize().
> Please also refer to this comment:
> https://issues.apache.org/jira/browse/PHOENIX-4585?focusedCommentId=16367616=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16367616



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-14 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399501#comment-16399501
 ] 

Ethan Wang commented on PHOENIX-4370:
-

Patch applied on 

4.x-HBase-1.1
4.x-HBase-1.2
4.x-HBase-1.3
4.x-cdh5.11.2
5.x-HBase-2.0
master

[~tdsilva] [~jamestaylor]

 

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Major
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently on the Phoenix client side, HBase metrics are recorded and surfaced 
> at the per-connection level. PHOENIX-4370 allows them to be aggregated at the 
> global level, i.e., across all connections within one JVM, so that users can 
> evaluate them as stable metrics periodically.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399219#comment-16399219
 ] 

Ethan Wang commented on PHOENIX-4231:
-

[~apurtell]

Thanks!

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading, but 
> it is only applied to jars loaded from the local filesystem. We should 
> implement support for a similar restriction, via configuration, for jars 
> loaded via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4611) Not nullable column impact on join query plans

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399158#comment-16399158
 ] 

Hudson commented on PHOENIX-4611:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1834 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1834/])
Revert "PHOENIX-4611 Not nullable column impact on join query plans" 
(maryannxue: rev 8f8209dcf83696869c7f0b567a86e0231ddef80f)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinMoreIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java


> Not nullable column impact on join query plans
> --
>
> Key: PHOENIX-4611
> URL: https://issues.apache.org/jira/browse/PHOENIX-4611
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> With PHOENIX-2566, there's a subtle change in projected tables in that a 
> column may end up being not nullable whereas before it was nullable when the 
> family name is not null. I've kept the old behavior with 
> [this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
>  commit, but would you mind confirming what the right thing to do is, 
> [~maryannxue]?
> Without this change, the explain plan changes in 
> SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the 
> compiler ends up changing the row ordering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399114#comment-16399114
 ] 

Andrew Purtell commented on PHOENIX-4231:
-

Ok

Committing today.

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading, but 
> it is only applied to jars loaded from the local filesystem. We should 
> implement support for a similar restriction, via configuration, for jars 
> loaded via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-14 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-4579:
-

Assignee: Chinmay Kulkarni  (was: Mujtaba Chohan)

> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on first client connection. 
> This adds a property to make that configurable (defaulting to true, matching 
> the current behavior).
> With this property set to false, we avoid the lockstep upgrade requirement 
> for all clients when changing meta properties using PHOENIX-4575, since the 
> property can be flipped back on once all the clients are upgraded.
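A minimal sketch of such a flag, assuming a hypothetical property name (`phoenix.autoupgrade.enabled` is illustrative here, not necessarily the key the patch uses):

```java
import java.util.Properties;

// Illustrative sketch: a boolean client config that defaults to true so
// existing deployments keep the create/modify-on-first-connect behavior.
public class AutoUpgradeConfigSketch {
    static final String AUTO_UPGRADE_ATTRIB = "phoenix.autoupgrade.enabled"; // hypothetical key

    public static boolean isAutoUpgradeEnabled(Properties props) {
        return Boolean.parseBoolean(props.getProperty(AUTO_UPGRADE_ATTRIB, "true"));
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        System.out.println(isAutoUpgradeEnabled(p)); // default: true
        p.setProperty(AUTO_UPGRADE_ATTRIB, "false");
        System.out.println(isAutoUpgradeEnabled(p)); // flipped off
    }
}
```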



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4616) Move join query optimization out from QueryCompiler into QueryOptimizer

2018-03-14 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399053#comment-16399053
 ] 

Maryann Xue commented on PHOENIX-4616:
--

As one of the potential improvements I mentioned earlier, we might want to find 
a global optimum for some query plans, so I think it'd be good to keep the 
optimization logic in one place and independent of QueryPlan for the long-term 
goal.

At this point, we only have three different situations to handle for all kinds 
of QueryPlan:
 # BaseQueryPlan
 # Joins
 # All others

Ultimately we'd like to make it only two branches: 1. BaseQueryPlan and 2. 
Non-BaseQueryPlan. Although we do have to separate BaseQueryPlan from other 
kinds of QueryPlan, I don't think it's worth using a visitor either. Shall I 
just push this in as it is now and figure out what we should do as we expand 
the optimization logic?
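The three-way branching described above can be sketched as plain instanceof dispatch. The plan classes below are simplified stand-ins for Phoenix's plan hierarchy, not its actual types:

```java
// Illustrative sketch of the three optimizer branches discussed:
// BaseQueryPlan, joins, and all other plan kinds.
public class PlanDispatchSketch {
    interface QueryPlan {}                                  // stand-in hierarchy
    static class BaseQueryPlan implements QueryPlan {}
    static class JoinPlan implements QueryPlan {}
    static class UnionPlan implements QueryPlan {}

    public static String optimizeBranch(QueryPlan plan) {
        if (plan instanceof BaseQueryPlan) return "base";   // single-table plans
        if (plan instanceof JoinPlan) return "join";        // join-specific rewrite
        return "other";                                     // everything else
    }

    public static void main(String[] args) {
        System.out.println(optimizeBranch(new JoinPlan()));
    }
}
```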

> Move join query optimization out from QueryCompiler into QueryOptimizer
> ---
>
> Key: PHOENIX-4616
> URL: https://issues.apache.org/jira/browse/PHOENIX-4616
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Major
> Attachments: PHOENIX-4616.patch
>
>
> Currently we do optimization for join queries inside QueryCompiler, which 
> makes the APIs and code logic confusing, so we need to move join optimization 
> logic into QueryOptimizer.
>  Similarly, but probably with a different approach, we need to optimize UNION 
> ALL queries and derived table sub-queries in QueryOptimizer.optimize().
> Please also refer to this comment:
> https://issues.apache.org/jira/browse/PHOENIX-4585?focusedCommentId=16367616=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16367616



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4619) Process transactional updates to local index on server-side

2018-03-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4619:
--
Fix Version/s: 5.0.0
   4.14.0

> Process transactional updates to local index on server-side
> ---
>
> Key: PHOENIX-4619
> URL: https://issues.apache.org/jira/browse/PHOENIX-4619
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
>
> For local indexes, we'll want to continue to process updates on the 
> server-side. After PHOENIX-4278, updates even for local indexes are occurring 
> on the client-side. The reason is that we know the updates to the index table 
> will be a local write and we can generate the write on the server side. 
> Having a separate RPC and sending the updates across the wire would be 
> tremendously inefficient. On top of that, we need the region boundary 
> information, which we already have in the coprocessor but would need to 
> retrieve on the client side (with a likely race condition too if a split 
> occurs after we retrieve it).
> To fix this, we need to modify PhoenixTxnIndexMutationGenerator such that it 
> can be used on the server-side as well. The main change will be to change this 
> method signature to pass through an IndexMaintainer instead of a PTable 
> (which isn't available on the server-side):
> {code}
> public List<Mutation> getIndexUpdates(final PTable table, PTable index, 
> List<Mutation> dataMutations) throws IOException, SQLException {
> {code}
> I think this can be changed to the following instead and be used both client 
> and server side:
> {code}
> public List<Mutation> getIndexUpdates(final IndexMaintainer maintainer, 
> byte[] dataTableName, List<Mutation> dataMutations) throws IOException, 
> SQLException {
> {code}
> We can tweak the code that makes PhoenixTransactionalIndexer a noop for 
> clients >= 4.14 to have it execute if the index is a local index. The one 
> downside is that if there's a mix of local and global indexes on the same 
> table, the index update calculation will be done twice. I think having a mix 
> of index types would be rare, though, and we should advise against it.
> There's also this code in UngroupedAggregateRegionObserver which needs to be 
> updated to write shadow cells for Omid:
> {code}
> } else if (buildLocalIndex) {
>     for (IndexMaintainer maintainer : indexMaintainers) {
>         if (!results.isEmpty()) {
>             result.getKey(ptr);
>             ValueGetter valueGetter =
>                     maintainer.createGetterFromKeyValues(
>                             ImmutableBytesPtr.copyBytesIfNecessary(ptr),
>                             results);
>             Put put = maintainer.buildUpdateMutation(kvBuilder,
>                     valueGetter, ptr, results.get(0).getTimestamp(),
>                     env.getRegion().getRegionInfo().getStartKey(),
>                     env.getRegion().getRegionInfo().getEndKey());
>             indexMutations.add(put);
>         }
>     }
>     result.setKeyValues(results);
> {code}
> This is the code that builds a local index initially (unlike the global index 
> code path which executes an UPSERT SELECT on the client side to do this 
> initial population).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4619) Process transactional updates to local index on server-side

2018-03-14 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4619:
-

Assignee: James Taylor

> Process transactional updates to local index on server-side
> ---
>
> Key: PHOENIX-4619
> URL: https://issues.apache.org/jira/browse/PHOENIX-4619
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>
> For local indexes, we'll want to continue to process updates on the 
> server-side. After PHOENIX-4278, updates even for local indexes are occurring 
> on the client-side. The reason is that we know the updates to the index table 
> will be a local write and we can generate the write on the server side. 
> Having a separate RPC and sending the updates across the wire would be 
> tremendously inefficient. On top of that, we need the region boundary 
> information, which we already have in the coprocessor but would need to 
> retrieve on the client side (with a likely race condition too if a split 
> occurs after we retrieve it).
> To fix this, we need to modify PhoenixTxnIndexMutationGenerator such that it 
> can be used on the server-side as well. The main change will be to change this 
> method signature to pass through an IndexMaintainer instead of a PTable 
> (which isn't available on the server-side):
> {code}
> public List<Mutation> getIndexUpdates(final PTable table, PTable index, 
> List<Mutation> dataMutations) throws IOException, SQLException {
> {code}
> I think this can be changed to the following instead and be used both client 
> and server side:
> {code}
> public List<Mutation> getIndexUpdates(final IndexMaintainer maintainer, 
> byte[] dataTableName, List<Mutation> dataMutations) throws IOException, 
> SQLException {
> {code}
> We can tweak the code that makes PhoenixTransactionalIndexer a noop for 
> clients >= 4.14 to have it execute if the index is a local index. The one 
> downside is that if there's a mix of local and global indexes on the same 
> table, the index update calculation will be done twice. I think having a mix 
> of index types would be rare, though, and we should advise against it.
> There's also this code in UngroupedAggregateRegionObserver which needs to be 
> updated to write shadow cells for Omid:
> {code}
> } else if (buildLocalIndex) {
>     for (IndexMaintainer maintainer : indexMaintainers) {
>         if (!results.isEmpty()) {
>             result.getKey(ptr);
>             ValueGetter valueGetter =
>                     maintainer.createGetterFromKeyValues(
>                             ImmutableBytesPtr.copyBytesIfNecessary(ptr),
>                             results);
>             Put put = maintainer.buildUpdateMutation(kvBuilder,
>                     valueGetter, ptr, results.get(0).getTimestamp(),
>                     env.getRegion().getRegionInfo().getStartKey(),
>                     env.getRegion().getRegionInfo().getEndKey());
>             indexMutations.add(put);
>         }
>     }
>     result.setKeyValues(results);
> {code}
> This is the code that builds a local index initially (unlike the global index 
> code path which executes an UPSERT SELECT on the client side to do this 
> initial population).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4322) DESC primary key column with variable length does not work in SkipScanFilter

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398597#comment-16398597
 ] 

Hudson commented on PHOENIX-4322:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-4322 DESC primary key column with variable length does not work 
(maryannxue: rev 92b57c7893c91d90d78e30171e233043dbcb4583)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java


> DESC primary key column with variable length does not work in SkipScanFilter
> 
>
> Key: PHOENIX-4322
> URL: https://issues.apache.org/jira/browse/PHOENIX-4322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4322.patch
>
>
> Example:
> {code}
> @Test
> public void inDescCompositePK3() throws Exception {
>     String table = generateUniqueName();
>     String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, code "
>             + "VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
>     Object[][] insertedRows = new Object[][]{{"o1", "1"}, {"o2", "2"}, {"o3", "3"}};
>     runQueryTest(ddl, upsert("oid", "code"), insertedRows,
>             new Object[][]{{"o2", "2"}, {"o1", "1"}},
>             new WhereCondition("(oid, code)", "IN", "(('o2', '2'), ('o1', '1'))"),
>             table);
> }
> {code}
> Here the last primary key column is in DESC order and has variable length, 
> and the WHERE clause involves an "IN" operator with a RowValueConstructor 
> specifying all PK columns. We get no results.
> This ends up being the root cause of not being able to use the child/parent 
> join optimization on DESC pk columns, as described in PHOENIX-3050.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4585) Prune local index regions used for join queries

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398602#comment-16398602
 ] 

Hudson commented on PHOENIX-4585:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-4585 Prune local index regions used for join queries (maryannxue: rev 
babda3258921fdf4de595ba734d972860d58a0a4)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java


> Prune local index regions used for join queries
> ---
>
> Key: PHOENIX-4585
> URL: https://issues.apache.org/jira/browse/PHOENIX-4585
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Maryann Xue
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4585.patch
>
>
> Some remaining work from PHOENIX-3941: we currently do not capture the data 
> plan as part of the index plan due to the way in which we rewrite the 
> statement during join processing. See comment here for more detail: 
> https://issues.apache.org/jira/browse/PHOENIX-3941?focusedCommentId=16351017=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16351017



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3050) Handle DESC columns in child/parent join optimization

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398598#comment-16398598
 ] 

Hudson commented on PHOENIX-3050:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-3050 Handle DESC columns in child/parent join optimization (maryannxue: 
rev 977699afe0d66f1434b8bc1c5a751767e563d6ce)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java


> Handle DESC columns in child/parent join optimization
> -
>
> Key: PHOENIX-3050
> URL: https://issues.apache.org/jira/browse/PHOENIX-3050
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3050.patch
>
>
> We found that child/parent join optimization would not work with DESC pk 
> columns. So as a quick fix for PHOENIX-3029, we simply avoid DESC columns 
> when optimizing, which would have no impact on the overall approach and no 
> impact on ASC columns.
>  
> But eventually we need to make the optimization work with DESC columns too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3505) Potential NullPointerException on close() in OrderedResultIterator

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398600#comment-16398600
 ] 

Hudson commented on PHOENIX-3505:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-3505 Avoid NPE on close() in OrderedResultIterator (maryannxue: rev 
2c758234186e5a4d70cdc6501df19f9b0d9ec601)
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/iterate/OrderedResultIteratorTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/OrderedResultIterator.java


> Potential NullPointerException on close() in OrderedResultIterator
> --
>
> Key: PHOENIX-3505
> URL: https://issues.apache.org/jira/browse/PHOENIX-3505
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Santhosh B Gowda
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.10.0, 4.8.3
>
> Attachments: PHOENIX-3505.001.patch
>
>
> I observed an NPE in executing query 10 from TPC-H over a Phoenix-4.7ish 
> version at $dayjob.
> {noformat}
> select
>   c_custkey,
>   c_name,
>   sum(l_extendedprice*(1 - l_discount)) as revenue,
>   c_acctbal,
>   n_name,
>   c_address,
>   c_phone,
>   c_comment
> from
>   customer,
>   orders,
>   lineitem,
>   nation
> where
>   c_custkey = o_custkey
>   and l_orderkey = o_orderkey
>   and o_orderdate >= to_date('1993-07-01')
>   and o_orderdate < to_date('1993-10-01')
>   and l_returnflag = 'R'
>   and c_nationkey = n_nationkey
> group by
>   c_custkey,
>   c_name,
>   c_acctbal,
>   c_phone,
>   n_name,
>   c_address,
>   c_comment
> order by
>   revenue desc
> {noformat}
> DDL are:
> {noformat}
> CREATE TABLE NATION ( N_NATIONKEY INTEGER NOT NULL,
>     N_NAME CHAR(25) NOT NULL,
>     N_REGIONKEY INTEGER NOT NULL,
>     N_COMMENT VARCHAR(152));
> CREATE TABLE CUSTOMER ( C_CUSTKEY INTEGER NOT NULL,
>     C_NAME VARCHAR(25) NOT NULL,
>     C_ADDRESS VARCHAR(40) NOT NULL,
>     C_NATIONKEY INTEGER NOT NULL,
>     C_PHONE CHAR(15) NOT NULL,
>     C_ACCTBAL DECIMAL(15,2) NOT NULL,
>     C_MKTSEGMENT CHAR(10) NOT NULL,
>     C_COMMENT VARCHAR(117) NOT NULL);
> CREATE TABLE ORDERS ( O_ORDERKEY INTEGER NOT NULL,
>     O_CUSTKEY INTEGER NOT NULL,
>     O_ORDERSTATUS CHAR(1) NOT NULL,
>     O_TOTALPRICE DECIMAL(15,2) NOT NULL,
>     O_ORDERDATE DATE NOT NULL,
>     O_ORDERPRIORITY CHAR(15) NOT NULL,
>     O_CLERK CHAR(15) NOT NULL,
>     O_SHIPPRIORITY INTEGER NOT NULL,
>     O_COMMENT VARCHAR(79) NOT NULL);
> CREATE TABLE LINEITEM ( L_ORDERKEY INTEGER NOT NULL,
>     L_PARTKEY INTEGER NOT NULL,
>     L_SUPPKEY INTEGER NOT NULL,
>     L_LINENUMBER INTEGER NOT NULL,
>     L_QUANTITY DECIMAL(15,2) NOT NULL,
>     L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
>     L_DISCOUNT DECIMAL(15,2) NOT NULL,
>     L_TAX DECIMAL(15,2) NOT NULL,
>     L_RETURNFLAG CHAR(1) NOT NULL,
>     L_LINESTATUS CHAR(1) NOT NULL,
>     L_SHIPDATE DATE NOT NULL,
>     L_COMMITDATE DATE NOT NULL,
>     L_RECEIPTDATE DATE NOT NULL,
>     L_SHIPINSTRUCT CHAR(25) NOT NULL,
>     L_SHIPMODE CHAR(10) NOT NULL,
>     L_COMMENT VARCHAR(44) NOT NULL);
> {noformat}
> We ultimately got a NullPointerException trying to close the PhoenixStatement 
> down in OrderedResultIterator. The only execution path I can come up with is 
> that the iterator was constructed but {{next()}} or {{peek()}} were never 
> called (for whatever reason). Calling {{close()}} at this point would result 
> in an NPE being thrown.
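The shape of the fix can be sketched as a null guard in close(). This is a simplified stand-in for OrderedResultIterator, whose delegate field name and lazy-initialization details are assumed here for illustration:

```java
// Illustrative sketch: a resource created lazily on first next()/peek()
// must be null-checked in close(), since close() may be called on an
// iterator that was never iterated.
public class SafeCloseSketch {
    private AutoCloseable delegate; // stays null until the iterator is first used

    public void close() throws Exception {
        if (delegate != null) { // guard prevents the NPE described above
            delegate.close();
        }
    }

    public static void main(String[] args) throws Exception {
        new SafeCloseSketch().close(); // no NPE even though delegate is null
        System.out.println("closed ok");
    }
}
```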



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398599#comment-16398599
 ] 

Hudson commented on PHOENIX-4437:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of 
(maryannxue: rev 7ef96fe1bed43f3ac3dae900a3e6a83791faf697)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java


> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1556) Base hash versus sort merge join decision on cost

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398601#comment-16398601
 ] 

Hudson commented on PHOENIX-1556:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-1556 Base hash versus sort merge join decision on cost (maryannxue: rev 
6914d54d99b4fafae44d1a3397c44ba6e5d10368)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/CorrelatePlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/LiteralResultIterationPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/TraceQueryPlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/CostUtil.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/ClientProcessingPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/QueryPlan.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/execute/visitor/ByteCountVisitor.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/UnnestArrayPlan.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/execute/visitor/QueryPlanVisitor.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/ClientScanPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/TupleProjectionPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/CursorFetchPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ListJarsQueryPlan.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/execute/visitor/AvgRowWidthVisitor.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/execute/visitor/RowCountVisitor.java


> Base hash versus sort merge join decision on cost
> -
>
> Key: PHOENIX-1556
> URL: https://issues.apache.org/jira/browse/PHOENIX-1556
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Maryann Xue
>Priority: Major
>  Labels: CostBasedOptimization
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-1556.patch
>
>
> At compile time, we know how many guideposts (i.e., how many bytes) will be 
> scanned for the RHS table. By default, we should base the decision between 
> the hash join and the many-to-many (sort merge) join on this information.
> Another criterion (as we've seen in PHOENIX-4508) is whether the tables 
> being joined are already ordered by the join key. In that case, it's better 
> to always use the sort merge join.
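The decision rule described above can be sketched as follows. This is an illustrative stand-in, not Phoenix's actual `QueryPlan`/`CostUtil` API: the class name, the `maxHashCacheBytes` threshold, and the exact inputs are assumptions, but the shape of the choice (prefer sort merge when inputs are pre-sorted, otherwise hash join only while the RHS is small enough to broadcast) matches the criteria stated in the issue:

```java
// Hypothetical sketch of a cost-based join strategy choice.
// Names and thresholds are illustrative; Phoenix's real logic lives in
// QueryCompiler/CostUtil and uses guidepost-derived byte estimates.
public class JoinStrategyChooser {
    public enum JoinStrategy { HASH_JOIN, SORT_MERGE_JOIN }

    /**
     * Pick a join strategy from compile-time estimates.
     *
     * @param rhsBytes            estimated bytes scanned on the RHS (from guideposts)
     * @param maxHashCacheBytes   upper bound for broadcasting the RHS as a hash cache
     * @param inputsSortedOnJoinKey true when both sides already arrive in join-key order
     */
    public static JoinStrategy choose(long rhsBytes,
                                      long maxHashCacheBytes,
                                      boolean inputsSortedOnJoinKey) {
        // If both inputs are already ordered by the join key, sort merge
        // avoids both the sort and the hash-cache build: prefer it outright.
        if (inputsSortedOnJoinKey) {
            return JoinStrategy.SORT_MERGE_JOIN;
        }
        // Otherwise a hash join wins only while the RHS fits in the cache.
        return rhsBytes <= maxHashCacheBytes
                ? JoinStrategy.HASH_JOIN
                : JoinStrategy.SORT_MERGE_JOIN;
    }
}
```

For example, a 1 MB RHS against a 100 MB cache limit would pick the hash join, while a 10 GB RHS, or any pre-sorted pair of inputs, would fall through to sort merge.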





[jira] [Commented] (PHOENIX-4611) Not nullable column impact on join query plans

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398603#comment-16398603
 ] 

Hudson commented on PHOENIX-4611:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-4611 Not nullable column impact on join query plans (maryannxue: rev 
9bb7811f001d00cea42da6185c3645d7d14e4a16)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinMoreIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java


> Not nullable column impact on join query plans
> --
>
> Key: PHOENIX-4611
> URL: https://issues.apache.org/jira/browse/PHOENIX-4611
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> With PHOENIX-2566, there's a subtle change in projected tables in that a 
> column may end up being not nullable, whereas before it was nullable when the 
> family name is not null. I've kept the old behavior with 
> [this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
>  commit, but would you mind confirming what the right thing to do is, 
> [~maryannxue]?
> Without this change, the explain plan changes in 
> SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the 
> compiler ends up changing the row ordering.





[jira] [Commented] (PHOENIX-4288) Indexes not used when ordering by primary key

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398596#comment-16398596
 ] 

Hudson commented on PHOENIX-4288:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1833 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1833/])
PHOENIX-4288 Indexes not used when ordering by primary key (maryannxue: rev 
541d6ac22866fe7571365e063a23108c6ca1ea63)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/TraceQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/LiteralResultIterationPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
* (add) phoenix-core/src/main/java/org/apache/phoenix/optimize/Cost.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (add) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java
* (add) phoenix-core/src/main/java/org/apache/phoenix/util/CostUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ListJarsQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/CorrelatePlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/ClientScanPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/compile/QueryPlan.java


> Indexes not used when ordering by primary key
> -
>
> Key: PHOENIX-4288
> URL: https://issues.apache.org/jira/browse/PHOENIX-4288
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Marcin Januszkiewicz
>Assignee: Maryann Xue
>Priority: Major
>  Labels: CostBasedOptimization
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4288.patch
>
>
> We have a table
> CREATE TABLE t (
>   rowkey VARCHAR PRIMARY KEY,
>   c1 VARCHAR,
>   c2 VARCHAR
> )
> which we want to query by doing partial matches on c1, and keep the ordering 
> of the source table:
> SELECT rowkey, c1, c2 FROM t where c1 LIKE 'X0%' ORDER BY rowkey;
> We expect most queries to select a small subset of the table, so we create an 
> index to speed up searches:
> CREATE LOCAL INDEX t_c1_ix ON t (c1);
> However, this index will not be used since Phoenix will always choose not to 
> resort the data.
> In our actual use case, adding index hints is not a practical solution.
> See also discussion at:
> https://lists.apache.org/thread.html/26ab58288eb811d2f074c3f89067163d341e5531fb581f3b2486cf43@%3Cuser.phoenix.apache.org%3E
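The trade-off behind this issue is the one the attached patch makes cost-based: an ordered full-table scan needs no sort, while an index scan reads only the matching fraction but must re-sort to restore row-key order. The sketch below is an assumption-laden toy cost model (the names and the n·log n sort term are illustrative, not Phoenix's actual `Cost` API), showing why a selective predicate should flip the decision toward the index:

```java
// Toy cost model for "ordered full scan" vs "index scan + re-sort".
// Illustrative only; Phoenix's real cost model lives in Cost/CostUtil and
// derives row/byte estimates from statistics guideposts.
public class IndexChoiceSketch {
    /** Ordered full scan: read every row, no sort required. */
    static double fullScanCost(double tableRows) {
        return tableRows;
    }

    /** Index scan reads only matching rows, then sorts them back into row-key order. */
    static double indexScanCost(double tableRows, double selectivity) {
        double rows = tableRows * selectivity;
        return rows + rows * Math.log(Math.max(rows, 2)); // scan + n log n sort
    }

    /** True when the index-plus-sort plan is estimated cheaper. */
    static boolean preferIndex(double tableRows, double selectivity) {
        return indexScanCost(tableRows, selectivity) < fullScanCost(tableRows);
    }
}
```

With a million-row table, a `LIKE 'X0%'` predicate matching ~1% of rows makes the index plan far cheaper despite the re-sort, whereas a predicate matching 90% of rows leaves the ordered full scan the winner, which is exactly the distinction a fixed "never re-sort" rule cannot make.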





[jira] [Created] (PHOENIX-4653) Upgrading from namespace enabled cluster to latest version failing with UpgradeInProgressException

2018-03-14 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4653:


 Summary:  Upgrading from namespace enabled cluster to latest 
version failing with UpgradeInProgressException
 Key: PHOENIX-4653
 URL: https://issues.apache.org/jira/browse/PHOENIX-4653
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently the SYSTEM.MUTEX table is not created when namespaces were already 
enabled in an older version, so upgrading to the latest version fails with the 
following error.
{noformat}
Error: Cluster is being concurrently upgraded from 4.7.x to 5.0.x. Please retry 
establishing connection. (state=INT12,code=2010)
org.apache.phoenix.exception.UpgradeInProgressException: Cluster is being 
concurrently upgraded from 4.7.x to 5.0.x. Please retry establishing connection.
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3301)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2680)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2524)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2417)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2417)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.close(Commands.java:906)
at sqlline.Commands.quit(Commands.java:870)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{noformat}
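As the error message suggests, clients are expected to retry the connection while another node holds the upgrade mutex. A generic retry loop for that pattern might look like the sketch below; note that `UpgradeInProgressException` here is a local stand-in class, not the real `org.apache.phoenix.exception.UpgradeInProgressException`, and the helper is an assumption about client-side handling, not part of Phoenix itself:

```java
import java.util.concurrent.Callable;

// Hedged sketch of a client-side retry loop around connection establishment
// while a cluster-wide upgrade mutex is held. The exception class is a
// stand-in so the example is self-contained.
public class UpgradeRetry {
    /** Stand-in for org.apache.phoenix.exception.UpgradeInProgressException. */
    public static class UpgradeInProgressException extends Exception {}

    /**
     * Run the action, retrying up to maxAttempts times while it fails with
     * UpgradeInProgressException, sleeping retryDelayMs between attempts.
     * Any other exception propagates immediately.
     */
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long retryDelayMs)
            throws Exception {
        UpgradeInProgressException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (UpgradeInProgressException e) {
                last = e;                      // remember and retry
                if (attempt < maxAttempts) {
                    Thread.sleep(retryDelayMs); // back off before the next try
                }
            }
        }
        throw last; // exhausted all attempts
    }
}
```

In practice the `action` would be the `DriverManager.getConnection(...)` call that surfaced the stack trace above; once the upgrading node releases the mutex, a retried attempt succeeds.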





[jira] [Updated] (PHOENIX-4653) Upgrading from namespace enabled cluster to latest version failing with UpgradeInProgressException

2018-03-14 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4653:
-
Fix Version/s: 5.0.0
   4.14.0

>  Upgrading from namespace enabled cluster to latest version failing with 
> UpgradeInProgressException
> ---
>
> Key: PHOENIX-4653
> URL: https://issues.apache.org/jira/browse/PHOENIX-4653
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
>
> Currently the SYSTEM.MUTEX table is not created when namespaces were already 
> enabled in an older version, so upgrading to the latest version fails with 
> the following error.
> {noformat}
> Error: Cluster is being concurrently upgraded from 4.7.x to 5.0.x. Please 
> retry establishing connection. (state=INT12,code=2010)
> org.apache.phoenix.exception.UpgradeInProgressException: Cluster is being 
> concurrently upgraded from 4.7.x to 5.0.x. Please retry establishing 
> connection.
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3301)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2680)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2524)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.close(Commands.java:906)
>   at sqlline.Commands.quit(Commands.java:870)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}


