[jira] [Commented] (PHOENIX-3230) Upgrade code running concurrently on different JVMs could make clients unusable

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475927#comment-15475927
 ] 

Hudson commented on PHOENIX-3230:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #14 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/14/])
PHOENIX-3230 Upgrade code running concurrently on different JVMs could 
(samarth: rev abde6ad67808ae39769dff708b2cf653de485e58)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java


> Upgrade code running concurrently on different JVMs could make clients 
> unusable
> 
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> 

[jira] [Commented] (PHOENIX-3230) Upgrade code running concurrently on different JVMs could make clients unusable

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475923#comment-15475923
 ] 

Hudson commented on PHOENIX-3230:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1388 (See 
[https://builds.apache.org/job/Phoenix-master/1388/])
PHOENIX-3230 Upgrade code running concurrently on different JVMs could 
(samarth: rev df0d61179a97881b8b72595af198dbb346adfb9f)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> Upgrade code running concurrently on different JVMs could make clients 
> unusable
> 
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> 

[jira] [Commented] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475861#comment-15475861
 ] 

Lars Hofhansl commented on PHOENIX-3176:


bq.  This would be a pretty fundamental change, so we need to be careful with 
it.

Let's not do that for 4.8.x then.

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows are skipped when the row_timestamp column holds a future timestamp:
> {code}
> 0: jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double,
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, where the scan range is capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}
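
The capped upper bound explains the missing row: Phoenix maps the ROW_TIMESTAMP 
column onto the HBase cell timestamp and bounds the scan's time range at the 
statement's compile-time timestamp, so a cell written with a future timestamp 
falls outside the range. A minimal HBase-client sketch of the mechanism (the 
method name is illustrative; Scan.setTimeRange is the real API):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;

// Sketch only: a cell stamped in the future falls outside [0, compileTime)
// and is silently skipped, matching the
// "ROW TIMESTAMP FILTER [0, 1470901929982)" line in the plan above.
Scan buildRowTimestampScan(long compileTime) throws IOException {
    Scan scan = new Scan();
    scan.setTimeRange(0, compileTime); // upper bound fixed at compile time
    return scan;
}
{code}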





[jira] [Resolved] (PHOENIX-3230) Upgrade code running concurrently on different JVMs could make clients unusable

2016-09-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-3230.
---
Resolution: Fixed

> Upgrade code running concurrently on different JVMs could make clients 
> unusable
> 
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:231)
>   at 
> 

[jira] [Updated] (PHOENIX-3230) Upgrade code running concurrently on different JVMs could make clients unusable

2016-09-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3230:
--
Summary: Upgrade code running concurrently on different JVMs could make 
clients unusable  (was: SYSTEM.CATALOG gets restored from snapshot with 
multi-client connection)

> Upgrade code running concurrently on different JVMs could make clients 
> unusable
> 
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> 

[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475675#comment-15475675
 ] 

Samarth Jain commented on PHOENIX-3230:
---

We could still keep the initializationException code path, but be a little 
smarter: set initializationException when the exception is not an instance of 
IOException. If it is an instance of IOException, set it only when it is an 
instance of DoNotRetryIOException. 
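
A minimal sketch of that rule (the field and method names are assumptions, not 
the committed code):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.DoNotRetryIOException;

// Sketch only: any non-IOException is treated as fatal; an IOException is
// fatal only when it is a DoNotRetryIOException, so transient IOExceptions
// still leave room for the client to retry initialization.
private volatile Exception initializationException;

void maybeSetInitializationException(Exception e) {
    if (!(e instanceof IOException) || e instanceof DoNotRetryIOException) {
        initializationException = e;
    }
}
{code}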

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> 

[jira] [Commented] (PHOENIX-3081) Misleading exception on async stats update after major compaction

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475659#comment-15475659
 ] 

Hudson commented on PHOENIX-3081:
-

FAILURE: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #13 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/13/])
PHOENIX-3081 Consult RegionServer stopped/stopping state before logging 
(elserj: rev ce3533deb255697141cc790a1c3000b41d6863dd)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsScanner.java
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/schema/stats/StatisticsScannerTest.java


> Misleading exception on async stats update after major compaction
> -
>
> Key: PHOENIX-3081
> URL: https://issues.apache.org/jira/browse/PHOENIX-3081
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3081.001.patch
>
>
> Saw an error in some $dayjob testing where, while a RegionServer was going 
> down due to an exception, there was a scary-looking exception about being 
> unable to write to the stats table because an hconnection was closed. Pardon 
> the mismatched line numbers:
> {noformat}
> 2016-07-17 07:52:13,229 ERROR [phoenix-update-statistics-0] 
> stats.StatisticsScanner: Failed to update statistics table!
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:309)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:152)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
>   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
>   at 
> org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
>   at 
> org.apache.phoenix.schema.stats.StatisticsUtil.readStatistics(StatisticsUtil.java:136)
>   at 
> org.apache.phoenix.schema.stats.StatisticsWriter.deleteStats(StatisticsWriter.java:230)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:117)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:102)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: hconnection-0x5314972b closed
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1133)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.relocateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1338)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
>   ... 17 more
> {noformat}
> Looking into this some more, this async task to update the stats was still 
> running after the RegionServer was already in the process of shutting down. 
> The RegionServer had already closed all of the "userRegions", but, because 
> this task is async, the task was still running and using the RegionServer's 
> CoprocessorHConnection. So, the RegionServer thinks all of the user regions 
> are closed and it is safe to close the HConnection. In reality, there is 
> still code tied to those user regions that might be running (as we can see 

[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475627#comment-15475627
 ] 

James Taylor commented on PHOENIX-3230:
---

+1. Nice work, [~samarthjain]. Just curious on the initializationException code 
path - should we not be doing that, and instead do it the way you've done for 
this new exception?

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> 

[jira] [Updated] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3230:
--
Attachment: PHOENIX-3230_v2_nowhitespacediff.patch

Updated patch to address review comments around making the exception extend 
SQLException. 

The error stack trace now looks like:
{code}
Error: Cluster is being concurrently upgraded from 4.7.x to 4.8.x. Please retry 
establishing connection. (state=INT12,code=2010)
org.apache.phoenix.query.ConnectionQueryServicesImpl$UpgradeInProgressException:
 Cluster is being concurrently upgraded from 4.7.x to 4.8.x. Please retry 
establishing connection.
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:2764)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2341)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2278)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2278)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
{code}

Unfortunately, writing unit/integration tests for upgrade scenarios isn't 
always possible, and this happens to be one of those cases. There is currently 
no easy way to trigger the upgrade code from tests. Using different instances 
of ConnectionQueryServicesImpl won't help either.
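
For reference, a minimal sketch of what the stack trace above implies about 
the new exception - an inner class extending SQLException so that JDBC clients 
surface the SQLState/vendor-code pair (state=INT12, code=2010). This is a 
reconstruction from the output, not the committed source:

{code}
import java.sql.SQLException;

// Reconstructed sketch; the constructor shape and constants are assumptions
// based on the sqlline output above.
public static class UpgradeInProgressException extends SQLException {
    public UpgradeInProgressException(String upgradeFrom, String upgradeTo) {
        super("Cluster is being concurrently upgraded from " + upgradeFrom
                + " to " + upgradeTo + ". Please retry establishing connection.",
              "INT12", // SQLState shown in the output
              2010);   // vendor error code shown in the output
    }
}
{code}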

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch, 
> PHOENIX-3230_v2_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> when the second connection arrives a couple of seconds after the first, as 
> long as it falls within the upgrade window. This is likely to happen where a 
> pool of client machines all get upgraded to the latest Phoenix version. After 
> this exception, all clients cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 

[jira] [Commented] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475520#comment-15475520
 ] 

James Taylor commented on PHOENIX-1367:
---

Also, does the getParentName() change require any other code to change? Might 
want to do a quick check for calls to getParentName(), getParentSchemaName(), 
and getParentTableName() to make sure (if you haven't already). Might be worth 
a test run, but if you've run them locally that's fine too.

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-1367-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98-v2.patch, PHOENIX-1369-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98.patch, PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.





[jira] [Commented] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475517#comment-15475517
 ] 

James Taylor commented on PHOENIX-1367:
---

Looks good, [~tdsilva]. A couple of minor comments:
- Should we use a static constant prefix instead of adding the view name when 
you rename? That way we can use a name that we don't allow for real indexes. We 
could define something that starts with an underscore and put it in 
MetaDataUtil (see the sketch after this list).
- Do we need to override PTable.getParentSchemaName() and getParentTableName() 
as you've done for getParentName()?
- Should we only return the physicalName when it's a view? Otherwise, I think a 
regular table would return itself, no?
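
A sketch of the static-prefix idea from the first comment (the constant name 
and value are illustrative, not committed code):

{code}
// Hypothetical MetaDataUtil constant: unquoted Phoenix identifiers cannot
// start with an underscore, so the prefix cannot collide with a real,
// user-created index name.
public static final String CHILD_VIEW_INDEX_NAME_PREFIX = "_IDX_";

public static String childViewIndexName(String indexName) {
    return CHILD_VIEW_INDEX_NAME_PREFIX + indexName;
}
{code}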

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-1367-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98-v2.patch, PHOENIX-1369-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98.patch, PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.





[jira] [Updated] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-09-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1367:

Attachment: PHOENIX-1367-4.x-HBase-0.98-v3.patch

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-1367-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98-v2.patch, PHOENIX-1369-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98.patch, PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.





[jira] [Updated] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-09-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1367:

Attachment: (was: PHOENIX-1367-4.x-HBase-0.98-v3.patch)

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-1367-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98-v2.patch, PHOENIX-1369-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98.patch, PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.





[jira] [Updated] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-09-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1367:

Attachment: PHOENIX-1367-4.x-HBase-0.98-v3.patch

I have attached a v3 patch to fix the issues we found.

The following change was made in IndexMaintainer to ensure the index row key is 
generated correctly for fixed-length, nullable columns:

{code}
-    this.indexedExpressions.add(expression);
+    try {
+        // Surround constant with cast so that we can still know the original type.
+        // Otherwise, if we lose the type (for example when VARCHAR becomes CHAR),
+        // it can lead to problems in the type translation we do between data tables and indexes.
+        if (column.isNullable() && ExpressionUtil.isConstant(expression)) {
+            expression = CoerceExpression.create(expression, indexColumn.getDataType());
+        }
+        this.indexedExpressions.add(expression);
+    } catch (SQLException e) {
+        throw new RuntimeException(e); // Impossible
+    }
{code}

I changed PTableImpl.getParentName to return the physical name if the parent 
name is null:
{code}
-    return parentName;
+    // a view on a table will not have a parent name but will have a physical table name (which is the parent)
+    return parentName != null ? parentName : getPhysicalName();
{code}

I also modified the index that is added to a child view to use a new name, and 
to set the tenantId and the update cache frequency to never. This ensures we 
don't remove the existing index from the client cache. 

{code}
-    String viewStatement = IndexUtil.rewriteViewStatement(connection, index, physicalTable, view.getViewStatement());
-    index = PTableImpl.makePTable(index, viewStatement);
-    indexesToAdd.add(index);
+    String viewStatement = IndexUtil.rewriteViewStatement(connection, index, parentTable, view.getViewStatement());
+    PName modifiedIndexName = PNameFactory.newName(index.getName().getString() + QueryConstants.NAME_SEPARATOR + view.getName().getString());
+    // add the index table with a new name so that it does not conflict with the existing index table
+    // also set update cache frequency to never since the renamed index is not present on the server
+    indexesToAdd.add(PTableImpl.makePTable(index, modifiedIndexName, viewStatement, Long.MAX_VALUE, view.getTenantId()));
{code}

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-1367-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98-v2.patch, PHOENIX-1369-4.x-HBase-0.98-v3.patch, 
> PHOENIX-1369-4.x-HBase-0.98.patch, PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.





[jira] [Commented] (PHOENIX-3081) Misleading exception on async stats update after major compaction

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475436#comment-15475436
 ] 

Hudson commented on PHOENIX-3081:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1387 (See 
[https://builds.apache.org/job/Phoenix-master/1387/])
PHOENIX-3081 Consult RegionServer stopped/stopping state before logging 
(elserj: rev 36d500cb83209cd10ae87c5cce6648aab2866925)
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/schema/stats/StatisticsScannerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsScanner.java


> Misleading exception on async stats update after major compaction
> -
>
> Key: PHOENIX-3081
> URL: https://issues.apache.org/jira/browse/PHOENIX-3081
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3081.001.patch
>
>
> Saw an error in some $dayjob testing where, while a RegionServer was going 
> down due to an exception, there was a scary-looking exception about being 
> unable to write to the stats table because an hconnection was closed. Pardon 
> the mismatched line numbers:
> {noformat}
> 2016-07-17 07:52:13,229 ERROR [phoenix-update-statistics-0] 
> stats.StatisticsScanner: Failed to update statistics table!
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:309)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:152)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
>   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
>   at 
> org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
>   at 
> org.apache.phoenix.schema.stats.StatisticsUtil.readStatistics(StatisticsUtil.java:136)
>   at 
> org.apache.phoenix.schema.stats.StatisticsWriter.deleteStats(StatisticsWriter.java:230)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:117)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:102)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: hconnection-0x5314972b closed
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1133)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.relocateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1338)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
>   ... 17 more
> {noformat}
> Looking into this some more, this async task to update the stats was still 
> running after the RegionServer was already in the process of shutting down. 
> The RegionServer had already closed all of the "userRegions", but, because 
> this task is async, the task was still running and using the RegionServer's 
> CoprocessorHConnection. So, the RegionServer thinks all of the user regions 
> are closed and it is safe to close the HConnection. In reality, there is 
> still code tied to those user regions that might be running (as we can see 
> with the 
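
The quoted description breaks off here. Per the commit message above ("Consult 
RegionServer stopped/stopping state before logging"), the fix checks the 
server's shutdown state before emitting the scary ERROR. A minimal sketch of 
such a guard (RegionServerServices.isStopping()/isStopped() are real HBase 
APIs; the class and method names here are assumptions):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.regionserver.RegionServerServices;

// Sketch only: a closed hconnection is expected while the server is going
// down, so only log at ERROR when the RegionServer is still healthy.
class StatsFailureLogger {
    private static final Log LOG = LogFactory.getLog(StatsFailureLogger.class);

    void logFailure(RegionServerServices server, Throwable t) {
        if (server.isStopping() || server.isStopped()) {
            LOG.debug("Ignoring stats update failure; region server is shutting down", t);
        } else {
            LOG.error("Failed to update statistics table!", t);
        }
    }
}
{code}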

[jira] [Commented] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15475294#comment-15475294
 ] 

James Taylor commented on PHOENIX-476:
--

Thanks for the patch, [~kliew]. FYI, a fair amount of the work is already done 
thanks to functional indexes. Here's some feedback:
- In the grammar file, allow DEFAULT to be any expression (i.e. it'll end up 
being a ParseNode).
- Set the ColumnDef expressionStr argument based on the defaultNode.toString(). 
We could potentially add a new ColumnDef constructor that takes a ParseNode 
instead of String and a getExpressionNode() getter. You could check here that 
defaultNode.isStateless() is true and raise an exception if it isn't.
- In CreateTableCompiler, compile ColumnDef.getExpressionNode() using an 
ExpressionCompiler and verify that defaultExpression.isStateless() and 
defaultExpression.getDeterminism() == Determinism.ALWAYS. The latter will rule 
out CURRENT_DATE() and NEXT VALUE FOR expressions.
- For PColumn, we already have getExpressionStr(), which is all we need for now. 
We might want to add the following to cache the Expression version of the same:
{code}
Expression getDefaultExpression(PhoenixConnection conn);
{code}
The getDefaultExpression will be similar to the PTable.getIndexMaintainer call 
in that it'll parse and compile the string and cache it on PColumnImpl.
- The default value will be persisted in SYSTEM.CATALOG in the COLUMN_DEF 
column already based on ColumnDef.getExpressionStr(). It'll also flow already 
between client and server in PColumn (which is great).
- Instead of always setting the default value in UpsertCompiler for UPSERT 
VALUES, you only need to set it for row key columns. This logic can live in the 
PTable.newKey() method - just use the default value when a column that has a 
default is given a null value. You can use ExpressionUtil.getConstantExpression() 
to evaluate it dynamically and get the actual bytes to use (which will be 
returned in ptr).
- For other, non-PK columns, you don't need to store the value at all (see 
discussion above). Instead, in ExpressionCompiler.resolveColumn(), before 
returning {{ref}} (but only if it's not a PK column), you'd check if the 
PColumn has a default value using PColumn.getExpressionStr(). If it does, you'd 
wrap the {{ref}} in a CoalesceFunction with the first child being {{ref}} and 
the second child being the defaultExpression, as sketched after this list. In 
this way, when evaluated, we'd fall back to the default value if the column has 
no value.
- For indexes, copy over the expressionStr from the PColumn of the data table 
in MetaDataClient.createIndex() when we're creating the ColumnDef for covered 
columns (might already be occurring).
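
A sketch of the coalesce wrapping described in the list above (the 
CoalesceFunction constructor shape is an assumption; the class itself lives in 
org.apache.phoenix.expression.function):

{code}
import java.sql.SQLException;
import java.util.Arrays;
import org.apache.phoenix.expression.Expression;
import org.apache.phoenix.expression.function.CoalesceFunction;

// Sketch only: when the stored cell has no value, evaluation falls back to
// the column's declared DEFAULT without the default ever being persisted.
Expression wrapWithDefault(Expression ref, Expression defaultExpr) throws SQLException {
    return new CoalesceFunction(Arrays.asList(ref, defaultExpr));
}
{code}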

One open question: using this technique, what if the default value changes? Any 
thoughts on this, [~lhofhansl]? Since we didn't persist it before, rows that 
already exist would get the new default value. We don't have a mechanism for it 
to be changed now, so it's a bit of a theoretical issue currently. I suppose in 
theory we could persist the old value if the value were to change. Or lookup 
the default value for the column based on the timestamp of the data cell (which 
would get complicated).


> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can get away without 
> persisting anything. Although this takes a little more effort than persisting 
> the default value on an UPSERT for key value columns, the approach 
> has the benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of 

[jira] [Comment Edited] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475276#comment-15475276
 ] 

Kevin Liew edited comment on PHOENIX-3046 at 9/8/16 10:57 PM:
--

This may have been fixed by PHOENIX-2641 but I will double check.


was (Author: kliew):
This may have been fixed by https://issues.apache.org/jira/browse/PHOENIX-2641 
but I will double check.

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column





[jira] [Assigned] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew reassigned PHOENIX-3046:
---

Assignee: Kevin Liew

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column





[jira] [Commented] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475276#comment-15475276
 ] 

Kevin Liew commented on PHOENIX-3046:
-

This may have been fixed by https://issues.apache.org/jira/browse/PHOENIX-2641 
but I will double check.

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column





[jira] [Commented] (PHOENIX-3228) Index tables should not be configured with a custom/smaller MAX_FILESIZE

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475235#comment-15475235
 ] 

Hudson commented on PHOENIX-3228:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #12 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/12/])
PHOENIX-3228 Index tables should not be configured with a custom/smaller 
(larsh: rev 03e10054979e800acb72c362594fb18a5d3830f3)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java


> Index tables should not be configured with a custom/smaller MAX_FILESIZE
> 
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence utilize more region servers, but 
> generally this is not the right thing to do.





[jira] [Commented] (PHOENIX-3228) Index tables should not be configured with a custom/smaller MAX_FILESIZE

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475191#comment-15475191
 ] 

Hudson commented on PHOENIX-3228:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1386 (See 
[https://builds.apache.org/job/Phoenix-master/1386/])
PHOENIX-3228 Index tables should not be configured with a custom/smaller 
(larsh: rev fd6da35eee5b366530f73a435ae4bc4de0f0eb25)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Index tables should not be configured with a custom/smaller MAX_FILESIZE
> 
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence utilize more region servers, but 
> generally this is not the right thing to do.





[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475128#comment-15475128
 ] 

James Taylor commented on PHOENIX-3230:
---

Looks good, [~samarthjain]. If a test is possible, that'd be nice - maybe two 
connections with different ConnectionQueryServicesImpl? If it's too difficult 
(but not impossible), then maybe file another JIRA?

+1 with one minor nit: can you make UpgradeInProgressException derive from 
SQLException and add it to SQLExceptionCode like the others, so that users can 
react to it more easily (as it'll have a unique error code)?
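
For illustration, here's a minimal sketch of what that could look like (the 
error code, SQLSTATE, and message below are placeholders, not the committed 
values):

{code}
import java.sql.SQLException;

// Sketch: a dedicated exception with a unique error code so clients can
// detect a concurrent upgrade and retry instead of failing hard.
public class UpgradeInProgressException extends SQLException {
    public UpgradeInProgressException(String versionFrom, String versionTo) {
        super("Cluster is being concurrently upgraded from " + versionFrom
                + " to " + versionTo + ". Please retry establishing connection.",
              "INT12", // placeholder SQLSTATE
              2010);   // placeholder unique error code for SQLExceptionCode
    }
}
{code}

A client could then switch on SQLException.getErrorCode() and back off until 
the other JVM finishes the upgrade.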

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> if the second connection starts a couple of seconds later, within the upgrade 
> window. This is likely to happen where a pool of client machines all get 
> upgraded to the latest Phoenix version. After this exception, all clients will 
> cease to work with an undefined column exception due to the restored/aborted 
> upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> 

[jira] [Comment Edited] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475086#comment-15475086
 ] 

James Taylor edited comment on PHOENIX-3046 at 9/8/16 9:32 PM:
---

Any chance for a patch, [~kliew]?


was (Author: jamestaylor):
Any chance for a patch, @Kevin Liew?

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column





[jira] [Commented] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475067#comment-15475067
 ] 

James Taylor commented on PHOENIX-3072:
---

If we can get this in shape for 4.8.1, that'd be great, [~enis]. I agree, it seems 
important. Some questions/comments:
- It's difficult to tell what's changed with all the whitespace diffs. Can you 
generate a patch without that?
- It looks like you're setting a new "PRIORITY" attribute on table descriptor 
for indexes? How/where is this used?
- How will you handle local indexes since the table descriptor is the same data 
and index table? Should we add it as a column descriptor attribute instead, or 
would we not know which column families are involved when we're using this info?
- Minor nit: I suppose you're not using the HBase static constant for 
"PRIORITY" because it doesn't appear until HBase 1.3? Maybe we should define 
one in QueryConstants with a comment (see the sketch after this list)?
- Didn't priority get exposed as an attribute on operations now? If so, would 
that be an alternate implementation mechanism which is a bit more flexible?
- What about existing tables and indexes - I didn't see any upgrade code that 
sets this for those. If setting priority on operation is an option, that'd get 
around this.
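
For reference, a rough sketch of the table-descriptor variant discussed above 
(the "PRIORITY" key comes from the patch discussion; the class name, helper 
method, and use of HConstants.HIGH_QOS are assumptions for illustration):

{code}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class IndexPrioritySketch {
    // Assumed constant; HBase's own "PRIORITY" key only appears in 1.3+.
    public static final String PRIORITY_KEY = "PRIORITY";

    public static HTableDescriptor markAsHighPriority(TableName indexTable) {
        HTableDescriptor desc = new HTableDescriptor(indexTable);
        // Tag the index table so its regions can be opened by a separate,
        // higher-priority pool (see HBASE-16095).
        desc.setValue(PRIORITY_KEY, String.valueOf(HConstants.HIGH_QOS));
        return desc;
    }
}
{code}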

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





[jira] [Commented] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475086#comment-15475086
 ] 

James Taylor commented on PHOENIX-3046:
---

Any chance for a patch, @Kevin Liew?

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column





[jira] [Commented] (PHOENIX-3179) Trim or remove hadoop-common dependency fat from thin-client jar

2016-09-08 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475057#comment-15475057
 ] 

Josh Elser commented on PHOENIX-3179:
-

Thanks, [~lhofhansl], pushed it out for now. Not a big issue.

> Trim or remove hadoop-common dependency fat from thin-client jar
> 
>
> Key: PHOENIX-3179
> URL: https://issues.apache.org/jira/browse/PHOENIX-3179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.2
>
>
> 4.8.0 brought in hadoop-common, pretty much for Configuration and 
> UserGroupInformation, to the thin-client shaded jar.
> This ends up really bloating the size of the artifact, which is annoying. We 
> should be able to exclude some of the transitive dependencies which will 
> reduce the size.





[jira] [Commented] (PHOENIX-3081) Misleading exception on async stats update after major compaction

2016-09-08 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475059#comment-15475059
 ] 

Josh Elser commented on PHOENIX-3081:
-

Nope, I just forgot about it. Will rebase+apply. Thanks, Lars!

> Misleading exception on async stats update after major compaction
> -
>
> Key: PHOENIX-3081
> URL: https://issues.apache.org/jira/browse/PHOENIX-3081
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3081.001.patch
>
>
> Saw an error in some $dayjob testing where, while a RegionServer was going 
> down due to an exception, there was a scary looking exception about being 
> unable to write to the stats table because an hconnection was closed. Pardon 
> the mis-matched line numbers:
> {noformat}
> 2016-07-17 07:52:13,229 ERROR [phoenix-update-statistics-0] 
> stats.StatisticsScanner: Failed to update statistics table!
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:309)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:152)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:161)
>   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
>   at 
> org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
>   at 
> org.apache.phoenix.schema.stats.StatisticsUtil.readStatistics(StatisticsUtil.java:136)
>   at 
> org.apache.phoenix.schema.stats.StatisticsWriter.deleteStats(StatisticsWriter.java:230)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:117)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:102)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: hconnection-0x5314972b closed
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1133)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.relocateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1338)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
>   ... 17 more
> {noformat}
> Looking into this some more, this async task to update the stats was still 
> running after a RegionServer was already in the process of shutting down. The 
> RegionServer already closed all of the "userRegions", but, because this task 
> is async, the task is still running and using the RegionServer's 
> CoprocessorHConnection. So, the RegionServer thinks all of the user regions 
> are closed and it is safe to close the HConnection. In reality, there is 
> still code tied to those user regions that might be running (as we can see 
> with the above stacktrace). The next time the StatisticsScannerCallable tries 
> to use the HConnection, it will then error.
> I think the simple fix is to just use the CoprocessorEnvironment to access 
> the RegionServerServices and use the {{isClosing()}} and {{isClosed()}} 
> methods. This is all pretty minor because the RegionServer is already 
> shutting down, but 
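
A rough sketch of that kind of guard (the comment above names 
isClosing()/isClosed(); this sketch assumes the Stoppable-style 
isStopping()/isStopped() flags as stand-ins, and the class around it is 
invented):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionServerServices;

public class StatsGuardSketch {
    private static final Log LOG = LogFactory.getLog(StatsGuardSketch.class);

    /** Returns true if the async stats update should be skipped. */
    public static boolean serverShuttingDown(RegionCoprocessorEnvironment env) {
        RegionServerServices services = env.getRegionServerServices();
        if (services.isStopping() || services.isStopped()) {
            LOG.debug("RegionServer is shutting down; skipping stats update");
            return true; // the shared HConnection may already be closed
        }
        return false;
    }
}
{code}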

[jira] [Comment Edited] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475067#comment-15475067
 ] 

James Taylor edited comment on PHOENIX-3072 at 9/8/16 9:27 PM:
---

If we can get this in shape for 4.8.1, that'd be great, [~enis]. I agree, it seems 
important. Some questions/comments:
- It's difficult to tell what's changed with all the whitespace diffs. Can you 
generate a patch without that?
- It looks like you're setting a new "PRIORITY" attribute on table descriptor 
for indexes? How/where is this used? (never mind on this - I see it's part of 
an HBase JIRA).
- How will you handle local indexes since the table descriptor is the same data 
and index table? Should we add it as a column descriptor attribute instead, or 
would we not know which column families are involved when we're using this info?
- Minor nit: I suppose you're not using the HBase static constant for 
"PRIORITY" because it doesn't appear until HBase 1.3? Maybe we should define 
one in QueryConstants with a comment?
- Didn't priority get exposed as an attribute on operations now? If so, would 
that be an alternate implementation mechanism which is a bit more flexible?
- What about existing tables and indexes - I didn't see any upgrade code that 
sets this for those. If setting priority on operation is an option, that'd get 
around this.


was (Author: jamestaylor):
If we can get this in shape for 4.8.1, that'd be great, [~enis]. I agree, it seems 
important. Some questions/comments:
- It's difficult to tell what's changed with all the whitespace diffs. Can you 
generate a patch without that?
- It looks like you're setting a new "PRIORITY" attribute on table descriptor 
for indexes? How/where is this used?
- How will you handle local indexes since the table descriptor is the same data 
and index table? Should we add it as a column descriptor attribute instead, or 
would we not know which column families are involved when we're using this info?
- Minor nit: I suppose you're not using the HBase static constant for 
"PRIORITY" because it doesn't appear until HBase 1.3? Maybe we should define 
one in QueryConstants with a comment?
- Didn't priority get exposed as an attribute on operations now? If so, would 
that be an alternate implementation mechanism which is a bit more flexible?
- What about existing tables and indexes - I didn't see any upgrade code that 
sets this for those. If setting priority on operation is an option, that'd get 
around this.

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





[jira] [Updated] (PHOENIX-3179) Trim or remove hadoop-common dependency fat from thin-client jar

2016-09-08 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3179:

Priority: Minor  (was: Major)

> Trim or remove hadoop-common dependency fat from thin-client jar
> 
>
> Key: PHOENIX-3179
> URL: https://issues.apache.org/jira/browse/PHOENIX-3179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.2
>
>
> 4.8.0 brought in hadoop-common, pretty much for Configuration and 
> UserGroupInformation, to the thin-client shaded jar.
> This ends up really bloating the size of the artifact, which is annoying. We 
> should be able to exclude some of the transitive dependencies which will 
> reduce the size.





[jira] [Updated] (PHOENIX-3179) Trim or remove hadoop-common dependency fat from thin-client jar

2016-09-08 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3179:

Fix Version/s: (was: 4.8.1)
   4.8.2

> Trim or remove hadoop-common dependency fat from thin-client jar
> 
>
> Key: PHOENIX-3179
> URL: https://issues.apache.org/jira/browse/PHOENIX-3179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.2
>
>
> 4.8.0 brought in hadoop-common, pretty much for Configuration and 
> UserGroupInformation, to the thin-client shaded jar.
> This ends up really bloating the size of the artifact, which is annoying. We 
> should be able to exclude some of the transitive dependencies which will 
> reduce the size.





[jira] [Commented] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475049#comment-15475049
 ] 

Andrew Purtell commented on PHOENIX-3072:
-

Then +1 for not pushing out

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





[jira] [Commented] (PHOENIX-3204) Scanner lease timeout exception during UPSERT INTO t1 SELECT * FROM t2

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475020#comment-15475020
 ] 

James Taylor commented on PHOENIX-3204:
---

[~samarthjain] - can you take a stab at this? I think your idea of ignoring 
(and logging) an IOException at close is all that's needed.
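
Something along these lines (a minimal sketch; the helper class and log 
message are illustrative), so a scanner whose lease has already expired 
server-side doesn't fail the whole UPSERT SELECT:

{code}
import java.io.Closeable;
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class QuietCloseSketch {
    private static final Log LOG = LogFactory.getLog(QuietCloseSketch.class);

    /** Close, logging and swallowing the IOException instead of propagating it. */
    public static void closeQuietly(Closeable resource) {
        try {
            resource.close();
        } catch (IOException e) {
            LOG.info("Ignoring exception on close; scanner is likely already gone", e);
        }
    }
}
{code}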

> Scanner lease timeout exception during UPSERT INTO t1 SELECT * FROM t2
> --
>
> Key: PHOENIX-3204
> URL: https://issues.apache.org/jira/browse/PHOENIX-3204
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Lars Hofhansl
> Fix For: 4.9.0, 4.8.2
>
>
> We ran some larg'ish Phoenix tests on a larg'ish cluster.
> We had loaded 1bn rows into table t1 and tried to load the same data into a 
> different table t2. The UPSERT failed about 1/2 way through with various 
> exceptions like the following.
> Interesting are the exceptions in the close() chain, as Phoenix could probably 
> ignore those.
> Just filing here for reference.
> {code}
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> #4816618, waiting for some tasks to finish. Expected max=0, tasksSent=14, 
> tasksDone=13, currentTasksDone=13, retries=13 hasError=false, tableName=null
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> Left over 1 task(s) are processed on server(s): [host17-8,60020,1471339710192]
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> Regions against which left over task(s) are processed: [TEST3,97432328$#}
> ,1471913672240.9f2fd6435921da1a203daf52f0116a34.]
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> #4816622, waiting for some tasks to finish. Expected max=0, tasksSent=14, 
> tasksDone=13, currentTasksDone=13, retries=13 hasError=false, tableName=null
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> Left over 1 task(s) are processed on server(s): [host17-8,60020,1471339710192]
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> Regions against which left over task(s) are processed: [TEST3,97432328$#}
> ,1471913672240.9f2fd6435921da1a203daf52f0116a34.]
> 2016-08-23 00:56:41,860 INFO  [80-shared--pool2-t90] client.AsyncProcess - 
> #4816618, table=TEST3, attempt=14/35 SUCCEEDED on 
> host17-8,60020,1471339710192, tracking started Tue Aug 23 00:55:12 GMT 2016
> 2016-08-23 00:56:41,861 DEBUG [phoenix-1-thread-653] 
> token.AuthenticationTokenSelector - No matching token found
> 2016-08-23 00:56:41,861 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Creating SASL GSSAPI client. Server's Kerberos 
> principal name is hbase/host28-20@
> 2016-08-23 00:56:41,862 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Have sent token of size 730 from 
> initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will read input token of size 108 for 
> processing by initSASLContext
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will send token of size 0 from initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will read input token of size 32 for processing 
> by initSASLContext
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will send token of size 32 from initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - SASL client context established. Negotiated 
> QoP: auth
> 2016-08-23 00:56:41,865 INFO  [phoenix-1-thread-653] client.ClientScanner - 
> For hints related to the following exception, please try taking a look at: 
> https://hbase.apache.org/book.html#trouble.client.scantimeout
> 2016-08-23 00:56:41,865 WARN  [phoenix-1-thread-653] client.ScannerCallable - 
> Ignore, probably already closed
> org.apache.hadoop.hbase.UnknownScannerException: 
> org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '49'. This 
> can happen due to any of the following reasons: a) Scanner id given is wrong, 
> b) Scanner lease expired because of long wait between consecutive client 
> checkins, c) Server may be closing down, d) RegionServer restart during 
> upgrade.
> If the issue is due to reason (b), a possible fix would be increasing the 
> value of 'hbase.client.scanner.timeout.period' configuration.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3228)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2208)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> 

[jira] [Commented] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475018#comment-15475018
 ] 

Enis Soztutar commented on PHOENIX-3072:


We should commit this for 4.8.1. We have seen this in multiple production 
clusters already. Although there is a known workaround, it is a hassle and very 
user-unfriendly.

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





[jira] [Commented] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475015#comment-15475015
 ] 

Enis Soztutar commented on PHOENIX-3072:


bq. On the RS, we already make index table updates higher priority than data 
table updates
This happens on region open, and does not involve the RPC scheduling. In a 
cluster restart, all of the index and data table regions will be opened by the 
regionservers. There are only 3 threads that do the opening of regions by 
default, and for the data tables, the opening of the region blocks on doing the 
index updates. However, if the index regions are not opened yet, then they will 
not succeed even if the regionserver RPC scheduling works. The index regions 
will be waiting in the same "region opening queue" to be opened by the same 
regionserver.
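
(For reference, the size of that region-open pool is an HBase setting; below 
is a hedged illustration of raising it as a mitigation, assuming the standard 
property name. It only works around the ordering problem rather than fixing it.)

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionOpenThreadsSketch {
    public static Configuration withMoreOpenThreads() {
        Configuration conf = HBaseConfiguration.create();
        // Default is 3 threads per regionserver for opening regions.
        conf.setInt("hbase.regionserver.executor.openregion.threads", 10);
        return conf;
    }
}
{code}
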
bq. Also, would you mind generating a patch that ignores whitespace changes as 
it's difficult to find the change you've made.
Sorry, the existing code is full of extra whitespace, and my Eclipse settings 
trim it as a save action. This is to make sure that my patches do 
not introduce any more extra whitespace. I can put the patch in RB/github if 
you want.

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





Re: Starting to think about 4.8.1

2016-09-08 Thread larsh
And thanks to everybody who committed some of their jiras since my last email.
-- Lars

  From: "la...@apache.org" 
 To: "dev@phoenix.apache.org"  
 Sent: Thursday, September 8, 2016 2:00 PM
 Subject: Re: Starting to think about 4.8.1
   
I committed my two outstanding issues, and made a pass through the jira and 
pushed some to 4.8.2.
18 issues are still assigned to 4.8.1, please have a look. I'll do another more 
aggressive pass early next week.

-- Lars

      From: "la...@apache.org" 
 To: "dev@phoenix.apache.org" ; "la...@apache.org" 
 
 Sent: Wednesday, September 7, 2016 2:48 PM
 Subject: Re: Starting to think about 4.8.1
  
I meant: "It would be great if everybody could go through their jiras and push 
issues they won't get to within 10 days to 4.8.2 OR unschedule those from 4.8, 
to mark them as 4.9 only".

Also, please make sure that jira is up to date. If a fix was committed to the 
4.x/master and the 4.8 branches the jira should be marked with 4.8.1 and 4.9.0. 
I'll do a pass through git to make sure it matches up with jira.
-- Lars

      From: "la...@apache.org" 
 To: Dev  
 Sent: Wednesday, September 7, 2016 1:43 PM
 Subject: Starting to think about 4.8.1
  
I'd like to have an RC out within 10 days, to start a regular monthly cadence.

Just checked jira. There are 21 items fixed, and 30 items either open or 
patch-available. I'll do a pass through all the open issues. It would be great 
if everybody could go through their jiras and push issues they won't get to 
within 10 days to 4.8.2 and unschedule those from 4.8.
Also, if there's anything that must go in, please let me know.

Thanks.

-- Lars (- your friendly RM)



  

  

   

Re: Starting to think about 4.8.1

2016-09-08 Thread larsh
I committed my two outstanding issues, and made a pass through the jira and 
pushed some to 4.8.2.
18 issues are still assigned to 4.8.1, please have a look. I'll do another more 
aggressive pass early next week.

-- Lars

  From: "la...@apache.org" 
 To: "dev@phoenix.apache.org" ; "la...@apache.org" 
 
 Sent: Wednesday, September 7, 2016 2:48 PM
 Subject: Re: Starting to think about 4.8.1
   
I meant: "It would be great if everybody could go through their jiras and push 
issues they won't get to within 10 days to 4.8.2 OR unschedule those from 4.8, 
to mark them as 4.9 only".

Also, please make sure that jira is up to date. If a fix was committed to the 
4.x/master and the 4.8 branches the jira should be marked with 4.8.1 and 4.9.0. 
I'll do a pass through git to make sure it matches up with jira.
-- Lars

  From: "la...@apache.org" 
 To: Dev  
 Sent: Wednesday, September 7, 2016 1:43 PM
 Subject: Starting to think about 4.8.1
  
I'd like to have an RC out within 10 days, to start a regular monthly cadence.

Just checked jira. There are 21 items fixed, and 30 items either open or 
patch-available. I'll do a pass through all the open issues. It would be great 
if everybody could go through their jiras and push issues they won't get to 
within 10 days to 4.8.2 and unschedule those from 4.8.
Also, if there's anything that must go in, please let me know.

Thanks.

-- Lars (- your friendly RM)



   

   

[jira] [Commented] (PHOENIX-3072) Deadlock on region opening with secondary index recovery

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474982#comment-15474982
 ] 

Lars Hofhansl commented on PHOENIX-3072:


[~enis], [~jamestaylor], where are we with this? Push to 4.8.2?

> Deadlock on region opening with secondary index recovery
> 
>
> Key: PHOENIX-3072
> URL: https://issues.apache.org/jira/browse/PHOENIX-3072
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.9.0, 4.8.1
>
> Attachments: phoenix-3072_v1.patch
>
>
> There is a distributed deadlock that can happen in clusters with a moderate 
> number of regions for the data tables and secondary index tables, during a 
> cluster restart or after some large failure. We have seen this in a 
> couple of production cases already. 
> Opening of regions in HBase is performed by a thread pool with 3 threads by 
> default. Every regionserver can open 3 regions at a time. However, opening 
> data table regions has to write to multiple index regions during WAL 
> recovery. All other region open requests are queued up in a single queue. 
> This causes a deadlock, since the secondary index regions are also opened by 
> the same thread pools that do the work. So if there are more data table 
> regions than available region opening threads on the regionservers, the 
> secondary index region open requests just wait to be processed in the queue. 
> Since these index regions are not open, the region opening of data table 
> regions just blocks the region opening threads for a long time.
> One proposed fix is to use a different thread pool for opening regions of the 
> secondary index tables so that we will not deadlock. See HBASE-16095 for the 
> HBase-level fix. In Phoenix, we just have to set the priority for secondary 
> index tables. 





[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3178:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

pushing to 4.8.2 for now.

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.2
>
>
> To reproduce, use the following test:
> {code}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys, 1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}





[jira] [Commented] (PHOENIX-3179) Trim or remove hadoop-common dependency fat from thin-client jar

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474972#comment-15474972
 ] 

Lars Hofhansl commented on PHOENIX-3179:


[~elserj], are you planning a patch within the next week or so? Otherwise let's 
push to 4.8.2.

> Trim or remove hadoop-common dependency fat from thin-client jar
> 
>
> Key: PHOENIX-3179
> URL: https://issues.apache.org/jira/browse/PHOENIX-3179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
>
> 4.8.0 brought in hadoop-common, pretty much for Configuration and 
> UserGroupInformation, to the thin-client shaded jar.
> This ends up really bloating the size of the artifact, which is annoying. We 
> should be able to exclude some of the transitive dependencies which will 
> reduce the size.





[jira] [Updated] (PHOENIX-3172) sqlline-thin.py not spawning java process correctly, java process lingers

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3172:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

no patch. pushing to 4.8.2.

> sqlline-thin.py not spawning java process correctly, java process lingers
> -
>
> Key: PHOENIX-3172
> URL: https://issues.apache.org/jira/browse/PHOENIX-3172
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.2
>
>
> I'm noticing that, with the way sqlline-thin.py spawns the java command 
> (sqlline or a level of indirection), if the user C^c's before the program fully 
> loads (maybe if it's blocked?), the java program will linger (oftentimes 
> waiting on remote IO from PQS).
> The SIGINT (or SIGKILL in other cases) given to the python driver should also 
> cause the Java process to exit. Even after the python program exits, the Java 
> program continues to run, which is incorrect. It should exit if the parent 
> dies.





[jira] [Updated] (PHOENIX-3108) ImmutableIndexIT fails when run on its own

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3108:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

older issue, no patch, pushing to 4.8.2.

> ImmutableIndexIT fails when run on its own
> --
>
> Key: PHOENIX-3108
> URL: https://issues.apache.org/jira/browse/PHOENIX-3108
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.2
>
>
> [~prakul] and I noticed that when running ImmutableIndexIT on its own, the 
> test testCreateIndexDuringUpsertSelect fails for parameters localIndex = true 
> and transactional = false. The failure stacktrace is:
> {code}
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:376)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:805)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:719)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:810)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:327)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1388)
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:180)
> {code}
> My first guess is that the UPSERT SELECT running for building the local index 
> isn't setting the timestamp correctly. This is probably causing the select part 
> to read the records that are being upserted. FYI, [~rajeshbabu]. The reason 
> we are not seeing this error in jenkins is because the co-processor 
> CreateIndexRegionObserver isn't getting installed. Because the co-processor 
> is a server side property, the test class needs to extend 
> BaseOwnClusterHBaseManagedTimeIT and *not* BaseHBaseManagedTimeIT.  This will 
> make the test class run in its own mini-cluster. FYI, [~tdsilva] - it doesn't 
> look like your race condition test is getting exercised when running the test 
> suite via maven.
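
A minimal sketch of that change (class body elided; the point is only which 
base class gets extended, so the test runs in its own mini-cluster and the 
server-side co-processor property actually takes effect):

{code}
public class ImmutableIndexIT extends BaseOwnClusterHBaseManagedTimeIT {
    // existing test methods, including testCreateIndexDuringUpsertSelect,
    // stay as they are
}
{code}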





[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474964#comment-15474964
 ] 

Lars Hofhansl commented on PHOENIX-3230:


Patch looks good. No test? (probably hard given that this is a timing issue, 
still worth thinking about)

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> if the second connection starts a couple of seconds later, within the upgrade 
> window. This is likely to happen where a pool of client machines all get 
> upgraded to the latest Phoenix version. After this exception, all clients will 
> cease to work with an undefined column exception due to the restored/aborted 
> upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2282)
>   at 
> 

[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474962#comment-15474962
 ] 

Hudson commented on PHOENIX-2946:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #11 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/11/])
PHOENIX-2946 Projected comparison between date and timestamp columns 
(jamestaylor: rev da5f63c9027687d8a6527e0f813f749dfe681d31)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTimestamp.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToDateFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SecondFunction.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/DateUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DayOfMonthFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/RoundJodaDateExpression.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTime.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTime.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DateScalarFunction.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDate.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PLong.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/MonthFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedLong.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedDate.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixPreparedStatement.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/schema/types/PDataTypeTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/YearFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/util/csv/CsvUpsertExecutor.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/WeekFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/MinuteFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/RoundDateExpression.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestamp.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/SortOrder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/CeilDateExpression.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/CeilTimestampExpression.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/HourFunction.java


> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch, PHOENIX-2946_v6.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474963#comment-15474963
 ] 

Hudson commented on PHOENIX-3255:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #11 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/11/])
PHOENIX-3255 Increase test coverage for TIMESTAMP (Kevin Liew) (jamestaylor: 
rev 3516993b1d254670aadb7f99855d1a5dce9b078e)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java


> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3255.patch
>
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-09-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474956#comment-15474956
 ] 

Samarth Jain commented on PHOENIX-3210:
---

Patch doesn't apply. [~prakul], please rebase the patch.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3210.patch, PHOENIX-3210_v2.patch
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3118) Increase default value of hbase.client.scanner.max.result.size

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3118:
---
Fix Version/s: (was: 4.8.1)
   4.8.2
   4.9.0

And since there's no patch and this is an existing issue, pushing to 4.8.2.

> Increase default value of hbase.client.scanner.max.result.size
> --
>
> Key: PHOENIX-3118
> URL: https://issues.apache.org/jira/browse/PHOENIX-3118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Fix For: 4.9.0, 4.8.2
>
>
> See parent JIRA for a discussion on how to handle partial scan results. An 
> easy workaround would be to increase the 
> {{hbase.client.scanner.max.result.size}} above the default 2MB limit. In 
> combination with this, we could detect in BaseScannerRegionObserver.nextRaw() 
> if partial results are being returned and throw an exception. Silently 
> ignoring this is bad because it can lead to incorrect query results as 
> demonstrated by the parent JIRA.
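> 
> A minimal sketch of the client-side override this implies (the 8MB value is 
> illustrative, not a tested recommendation):
> {code}
> // Sketch only: raise the scanner result size cap above the 2MB default
> // on the client-side HBase configuration.
> Configuration conf = HBaseConfiguration.create();
> conf.setLong("hbase.client.scanner.max.result.size", 8L * 1024 * 1024);
> {code}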



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3118) Increase default value of hbase.client.scanner.max.result.size

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474947#comment-15474947
 ] 

Lars Hofhansl commented on PHOENIX-3118:


Hmm... HBase doesn't automatically "trip" the partial result logic (at least 
not in 0.98). Instead, if a single row is > 2MB it will be returned in one RPC 
regardless of what the HBase buffer size is. (will check 1.0+ as well).


> Increase default value of hbase.client.scanner.max.result.size
> --
>
> Key: PHOENIX-3118
> URL: https://issues.apache.org/jira/browse/PHOENIX-3118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Fix For: 4.8.1
>
>
> See parent JIRA for a discussion on how to handle partial scan results. An 
> easy workaround would be to increase the 
> {{hbase.client.scanner.max.result.size}} above the default 2MB limit. In 
> combination with this, we could detect in BaseScannerRegionObserver.nextRaw() 
> if partial results are being returned and throw an exception. Silently 
> ignoring this is bad because it can lead to incorrect query results as 
> demonstrated by the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3133) Investigate why offset queries with reverse scan take a long time

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3133:
---
Fix Version/s: (was: 4.8.1)
   4.8.2
   4.9.0

No patch, so moving to 4.8.2.

> Investigate why offset queries with reverse scan take a long time
> -
>
> Key: PHOENIX-3133
> URL: https://issues.apache.org/jira/browse/PHOENIX-3133
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Ankit Singhal
> Fix For: 4.9.0, 4.8.2
>
>
> We need to workaround HBASE-16296 because users of Phoenix won't see the fix 
> until at least the fix makes it into a release version of HBase. 
> Unfortunately, often times users are forced to stick to earlier version of 
> HBase, even after a release. PHOENIX-3121 works around the issue when there's 
> only a LIMIT clause. However, if there's a LIMIT and an OFFSET, the issue 
> still occurs. 
> Repro code courtesy, [~mujtabachohan] 
> {code}
> DDL:
> CREATE TABLE IF NOT EXISTS XYZ.T (
>   TENANT_ID CHAR(15) NOT NULL, 
>   KEY_PREFIX CHAR(3) NOT NULL,
>   CREATED_DATE DATE,
>   CREATED_BY CHAR(15),
>   LAST_UPDATE DATE,
>   LAST_UPDATE_BY CHAR(15),
>   SYSTEM_MODSTAMP DATE,
>   CONSTRAINT PK PRIMARY KEY (
>   TENANT_ID, 
>   KEY_PREFIX
>   )
>   ) VERSIONS=1, IMMUTABLE_ROWS=true, MULTI_TENANT=true, 
> REPLICATION_SCOPE=1
>   
>   CREATE VIEW IF NOT EXISTS XYZ.ABC_VIEW (
>   ACTIVITY_DATE DATE NOT NULL,
>   WHO_ID CHAR(15) NOT NULL,
>   WHAT_ID CHAR(15) NOT NULL,
>   CHANNEL_TYPE VARCHAR NOT NULL,
>   CHANNEL_ACTION_TYPE VARCHAR NOT NULL,
>   ENGAGEMENT_HISTORY_POC_ID CHAR(15) ,
>   CHANNEL_CONTEXT VARCHAR,
>   CONSTRAINT PKVIEW PRIMARY KEY
>   (
>   ACTIVITY_DATE, WHO_ID, WHAT_ID, CHANNEL_TYPE, 
> CHANNEL_ACTION_TYPE
>   )
>   )
>   AS SELECT * FROM XYZ.T WHERE KEY_PREFIX = '08m' 
> UPSERT records using this:
> Connection con = 
> DriverManager.getConnection("jdbc:phoenix:samarthjai-ltm3.internal.salesforce.com",
>  new Properties());
>   PreparedStatement pStatement;
>   pStatement = con.prepareStatement("upsert into XYZ.ABC_VIEW 
> (ACTIVITY_DATE,CHANNEL_ACTION_TYPE,CHANNEL_TYPE,TENANT_ID,WHAT_ID,WHO_ID) 
> values (TO_DATE('2010-11-11 
> 00:00:00.000'),?,'ABC','00Dx000GyYS','701xdzp','00Qx001S2qa')");
>   for (int i=0; i<1000;i++) {
>   pStatement.setString(1, UUID.randomUUID().toString());
>   pStatement.execute();
>   
>   if (i % 1 == 0) {
>   con.commit();
>   System.out.println(i);
>   }
>   }
> Sample query:
> @Test
> public void testLimitCacheQuery() throws Exception {
> String url = "jdbc:phoenix:localhost:2181";
> try (Connection conn = DriverManager.getConnection(url)) {
> PreparedStatement stmt = conn.prepareStatement("select * from 
> XYZ.ABC_VIEW where who_id = '00Qx001S2qa' and TENANT_ID='00Dx000GyYS' 
> order by activity_date desc LIMIT 18 OFFSET 2");
> stmt.setFetchSize(10);
> try (ResultSet rs = stmt.executeQuery()) {
> long startTime = System.currentTimeMillis();
> int record = 0;
> while (rs.next()) {
> System.out.println("Record "+ (++record) + " Time: " + 
> (System.currentTimeMillis() - startTime));
> startTime = System.currentTimeMillis();
> }
> }
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3204) Scanner lease timeout exception during UPSERT INTO t1 SELECT * FROM t2

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3204:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

No patch and existing issue. Moving to 4.8.2.

> Scanner lease timeout exception during UPSERT INTO t1 SELECT * FROM t2
> --
>
> Key: PHOENIX-3204
> URL: https://issues.apache.org/jira/browse/PHOENIX-3204
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Lars Hofhansl
> Fix For: 4.9.0, 4.8.2
>
>
> We ran some larg'ish Phoenix tests on a larg'ish cluster.
> We had loaded 1bn rows into table t1 and tried to load the same data into a 
> different table t2. The UPSERT failed about halfway through with various 
> exceptions like the following.
> Interesting are the exceptions in the close() chain, as Phoenix could 
> probably ignore those.
> Just filing here for reference.
> {code}
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> #4816618, waiting for some tasks to finish. Expected max=0, tasksSent=14, 
> tasksDone=13, currentTasksDone=13, retries=13 hasError=false, tableName=null
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> Left over 1 task(s) are processed on server(s): [host17-8,60020,1471339710192]
> 2016-08-23 00:56:41,851 INFO  [phoenix-1-thread-653] client.AsyncProcess - 
> Regions against which left over task(s) are processed: [TEST3,97432328$#}
> ,1471913672240.9f2fd6435921da1a203daf52f0116a34.]
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> #4816622, waiting for some tasks to finish. Expected max=0, tasksSent=14, 
> tasksDone=13, currentTasksDone=13, retries=13 hasError=false, tableName=null
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> Left over 1 task(s) are processed on server(s): [host17-8,60020,1471339710192]
> 2016-08-23 00:56:41,859 INFO  [phoenix-1-thread-293] client.AsyncProcess - 
> Regions against which left over task(s) are processed: [TEST3,97432328$#}
> ,1471913672240.9f2fd6435921da1a203daf52f0116a34.]
> 2016-08-23 00:56:41,860 INFO  [80-shared--pool2-t90] client.AsyncProcess - 
> #4816618, table=TEST3, attempt=14/35 SUCCEEDED on 
> host17-8,60020,1471339710192, tracking started Tue Aug 23 00:55:12 GMT 2016
> 2016-08-23 00:56:41,861 DEBUG [phoenix-1-thread-653] 
> token.AuthenticationTokenSelector - No matching token found
> 2016-08-23 00:56:41,861 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Creating SASL GSSAPI client. Server's Kerberos 
> principal name is hbase/host28-20@
> 2016-08-23 00:56:41,862 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Have sent token of size 730 from 
> initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will read input token of size 108 for 
> processing by initSASLContext
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will send token of size 0 from initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will read input token of size 32 for processing 
> by initSASLContext
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - Will send token of size 32 from initSASLContext.
> 2016-08-23 00:56:41,863 DEBUG [phoenix-1-thread-653] 
> security.HBaseSaslRpcClient - SASL client context established. Negotiated 
> QoP: auth
> 2016-08-23 00:56:41,865 INFO  [phoenix-1-thread-653] client.ClientScanner - 
> For hints related to the following exception, please try taking a look at: 
> https://hbase.apache.org/book.html#trouble.client.scantimeout
> 2016-08-23 00:56:41,865 WARN  [phoenix-1-thread-653] client.ScannerCallable - 
> Ignore, probably already closed
> org.apache.hadoop.hbase.UnknownScannerException: 
> org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '49'. This 
> can happen due to any of the following reasons: a) Scanner id given is wrong, 
> b) Scanner lease expired because of long wait between consecutive client 
> checkins, c) Server may be closing down, d) RegionServer restart during 
> upgrade.
> If the issue is due to reason (b), a possible fix would be increasing the 
> value of'hbase.client.scanner.timeout.period' configuration.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3228)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2208)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> 

[jira] [Updated] (PHOENIX-3168) Remove sqlline from LICENSE in source release

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3168:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

No patch, and minor issue, moving to 4.8.2

> Remove sqlline from LICENSE in source release
> -
>
> Key: PHOENIX-3168
> URL: https://issues.apache.org/jira/browse/PHOENIX-3168
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.2
>
>
> Looking at the official 4.8.0-rc2 (source) release artifacts, I'm noticing 
> that sqlline isn't actually bundled in source form. I think I had assumed we 
> had copied some scripts, but that does not appear to be the case.
> We can drop this LICENSE entry for Sqlline.
> Not a blocker for 4.8.0 (but still unnecessary and incorrect to include)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3081) Misleading exception on async stats update after major compaction

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474930#comment-15474930
 ] 

Lars Hofhansl commented on PHOENIX-3081:


+1

Patch looks good. Any reason to not commit it? [~elserj]

> Misleading exception on async stats update after major compaction
> -
>
> Key: PHOENIX-3081
> URL: https://issues.apache.org/jira/browse/PHOENIX-3081
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3081.001.patch
>
>
> Saw an error in some $dayjob testing where, while a RegionServer was going 
> down due to an exception, there was a scary-looking exception about being 
> unable to write to the stats table because an hconnection was closed. Pardon 
> the mis-matched line numbers:
> {noformat}
> 2016-07-17 07:52:13,229 ERROR [phoenix-update-statistics-0] 
> stats.StatisticsScanner: Failed to update statistics table!
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:309)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:152)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
>   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
>   at 
> org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
>   at 
> org.apache.phoenix.schema.stats.StatisticsUtil.readStatistics(StatisticsUtil.java:136)
>   at 
> org.apache.phoenix.schema.stats.StatisticsWriter.deleteStats(StatisticsWriter.java:230)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:117)
>   at 
> org.apache.phoenix.schema.stats.StatisticsScanner$StatisticsScannerCallable.call(StatisticsScanner.java:102)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: hconnection-0x5314972b closed
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1133)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.relocateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1338)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
>   ... 17 more
> {noformat}
> Looking into this some more, this async task to update the stats was still 
> running after a RegionServer already was in the process of shutting down. The 
> RegionServer already closed all of the "userRegions", but, because this task 
> is async, the task is still running and using the RegionServer's 
> CoprocessorHConnection. So, the RegionServer thinks all of the user regions 
> are closed and it is safe to close the HConnection. In reality, there is 
> still code tied to those user regions that might be running (as we can see 
> with the above stacktrace). The next time the StatisticsScannerCallable tries 
> to use the HConnection, it will then error.
> I think the simple fix is to just use the CoprocessorEnvironment to access 
> the RegionServerServices and use the {{isClosing()}} and {{isClosed()}} 
> methods. This is all pretty minor because the RegionServer is already 
> shutting down, 
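> 
> A sketch of the proposed guard (the environment cast and the 
> {{isClosing()}}/{{isClosed()}} method names follow this comment, not the 
> committed patch):
> {code}
> // Sketch only: skip the async stats update when the region server is
> // already shutting down, instead of failing on a closed HConnection.
> RegionServerServices services =
>     ((RegionCoprocessorEnvironment) env).getRegionServerServices();
> if (services.isClosing() || services.isClosed()) {
>     LOG.debug("Skipping stats update; region server is shutting down");
>     return null;
> }
> // ... otherwise proceed with the StatisticsWriter work as before.
> {code}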

[jira] [Commented] (PHOENIX-3046) `NOT LIKE '%'` unexpectedly returns results

2016-09-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474920#comment-15474920
 ] 

Lars Hofhansl commented on PHOENIX-3046:


[~jamestaylor], this one seems important. WDYT? If no patch emerges in a few 
days I'll push this to 4.8.2.

> `NOT LIKE '%'` unexpectedly returns results
> ---
>
> Key: PHOENIX-3046
> URL: https://issues.apache.org/jira/browse/PHOENIX-3046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Kevin Liew
>Priority: Minor
>  Labels: like, like-predicate, phoenix, regex, wildcard, wildcards
> Fix For: 4.9.0, 4.8.1
>
>
> The following returns all rows in the table when it should return no rows:
> {code}select * from emp where first_name not like '%'{code}
> The following returns no rows as expected:
> {code}select * from emp where first_name not like '%%'{code}
> first_name is a VARCHAR column



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3106) dev/make_rc.sh doesn't work with BSD `find`

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3106:
---
Fix Version/s: (was: 4.8.1)
   4.8.2

No patch, so moving to 4.8.2.

> dev/make_rc.sh doesn't work with BSD `find`
> ---
>
> Key: PHOENIX-3106
> URL: https://issues.apache.org/jira/browse/PHOENIX-3106
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 4.9.0, 4.8.2
>
>
> {{./dev/make_rc.sh}} uses some {{find}} invocations which work with GNU 
> find, but not the BSD variant.
> They're pretty easy to change so that we can support both:
> Change {{find filename}} to {{find . -name filename}} and {{find -iname 
> filename}} to {{find . -iname filename}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3228) Index tables should not be configured with a custom/smaller MAX_FILESIZE

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3228.

Resolution: Fixed

> Index tables should not be configured with a custom/smaller MAX_FILESIZE
> 
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence be able to utilize more region servers, 
> but generally this is not the right thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3228) Index tables should not be configured with a custom/smaller MAX_FILESIZE

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3228:
---
Summary: Index tables should not be configured with a custom/smaller 
MAX_FILESIZE  (was: Index table should not be configured with a custom/smaller 
MAX_FILESIZE)

> Index tables should not be configured with a custom/smaller MAX_FILESIZE
> 
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence be able to utilize more region servers, 
> but generally this is not the right thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3228) Index table should not be configured with a custom/smaller MAX_FILESIZE

2016-09-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3228:
---
Summary: Index table should not be configured with a custom/smaller 
MAX_FILESIZE  (was: Index table are configured with a custom/smaller 
MAX_FILESIZE)

> Index table should not be configured with a custom/smaller MAX_FILESIZE
> ---
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence be able to utilize more region servers, 
> but generally this is not the right thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474874#comment-15474874
 ] 

Hudson commented on PHOENIX-2946:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1385 (See 
[https://builds.apache.org/job/Phoenix-master/1385/])
PHOENIX-2946 Projected comparison between date and timestamp columns 
(jamestaylor: rev 210445ded95d9da503c13cac24b4148aa819e205)
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/schema/types/PDataTypeTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/CeilTimestampExpression.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DateScalarFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestamp.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/CeilDateExpression.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTime.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTimestamp.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/RoundDateExpression.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/DateUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/HourFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SecondFunction.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/SortOrder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DayOfWeekFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/YearFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/MinuteFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixPreparedStatement.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/WeekFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/RoundJodaDateExpression.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedDate.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTime.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DayOfYearFunction.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PLong.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedLong.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/MonthFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToDateFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/DayOfMonthFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/util/csv/CsvUpsertExecutor.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDate.java


> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch, PHOENIX-2946_v6.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474875#comment-15474875
 ] 

Hudson commented on PHOENIX-3255:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1385 (See 
[https://builds.apache.org/job/Phoenix-master/1385/])
PHOENIX-3255 Increase test coverage for TIMESTAMP (Kevin Liew) (jamestaylor: 
rev d94d57183501526cff6fc1c5d1475e487c9c4653)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java


> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3255.patch
>
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3255.
---
Resolution: Fixed

> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3255.patch
>
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-09-08 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474786#comment-15474786
 ] 

Samarth Jain commented on PHOENIX-3210:
---

+1, I will get this committed. Thanks, [~prakul]!

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3210.patch, PHOENIX-3210_v2.patch
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3230) SYSTEM.CATALOG gets restored from snapshot with multi-client connection

2016-09-08 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3230:
--
Attachment: PHOENIX-3230_nowhitespacediff.patch

Patch to make sure only one client JVM is able to run the upgrade. Clients that 
lose the race are thrown an UpgradeInProgressException. The 
UpgradeInProgressException isn't marked as an initException so that the clients 
can keep retrying on their end to establish the first connection to the 
cluster. I verified this by writing a simple test that keeps requesting a 
connection in a loop while another client is concurrently running the upgrade. 
Once the upgrade is done, the test was able to acquire a connection successfully. 
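
A sketch of that retry loop (the URL and sleep interval are illustrative; the 
exception type is taken from the stacktrace below):
{code}
// Sketch only: keep retrying the first connection while another client
// JVM holds the upgrade mutex.
Connection conn = null;
while (conn == null) {
    try {
        conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
    } catch (ConnectionQueryServicesImpl.UpgradeInProgressException e) {
        Thread.sleep(1000); // upgrade still in progress; try again
    }
}
{code}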

The UpgradeInProgressException stacktrace looks like this:
{code}
org.apache.phoenix.query.ConnectionQueryServicesImpl$UpgradeInProgressException:
 Cluster is being upgraded from 4.7.x to 4.8.x. Please retry establishing 
connection
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:2768)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2341)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:1)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2278)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at 
org.apache.phoenix.end2end.PhoenixRuntimeIT.testConnection(PhoenixRuntimeIT.java:151)

{code}

I also had to tweak the way we were looking up the version string for system 
catalog timestamps. It turns out that when doing concurrent upgrades, we could 
be at an intermediate timestamp. Using a navigable map to store the 
timestamp->version combo provides a range-based lookup. 
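
A sketch of that range-based lookup (constant names assumed from 
MetaDataProtocol's versioned timestamps; map contents are illustrative):
{code}
// Sketch only: floorEntry resolves an intermediate SYSTEM.CATALOG
// timestamp to the closest known version at or below it.
NavigableMap<Long, String> timestampToVersion = new TreeMap<>();
timestampToVersion.put(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0, "4.7.x");
timestampToVersion.put(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0, "4.8.x");

String version = timestampToVersion.floorEntry(systemCatalogTimestamp).getValue();
{code}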

I haven't added the part to store the upgrade state in a STATUS column in 
SYSTEM.CATALOG, as I believe this mechanism ends up providing enough info to 
the user. 

[~jamestaylor] - please review.

> SYSTEM.CATALOG gets restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3230_nowhitespacediff.patch
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1 
> then the second connection fails with the following exception. This happens 
> even if the second connection is a couple of seconds apart but within the 
> upgrade window. This is likely to happen in situations where a pool of client 
> machines all get upgraded to the latest Phoenix version. After this 
> exception, all clients will cease to work with an undefined column exception 
> due to the restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> 

[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474738#comment-15474738
 ] 

Hadoop QA commented on PHOENIX-2946:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12827618/PHOENIX-2946_v6.patch
  against master branch at commit 3a8724eee05aaf477bf6085415e781856990e1c0.
  ATTACHMENT ID: 12827618

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private void setTimestampParameter(int parameterIndex, Timestamp x, 
Calendar cal) throws SQLException {
+// Ignore trailing zero bytes if fixed byte length (for 
example TIMESTAMP compared to DATE)
+if (lhsLength != rhsLength && this.isFixedWidth() && 
rhsType.isFixedWidth() && this.getByteSize() != null && rhsType.getByteSize() 
!= null) {
+for (int i = lhsOffset + lhsLength - 1; i >= minOffset 
&& lhsSortOrder.normalize(lhs[i]) == 0; i--,lhsLength--) {
+for (int i = rhsOffset + rhsLength - 1; i >= minOffset 
&& rhsSortOrder.normalize(rhs[i]) == 0; i--,rhsLength--) {
+new DateCodec(), 11); // After TIMESTAMP and DATE to ensure 
toLiteral finds those first
+public Date toObject(byte[] b, int o, int l, PDataType actualType, 
SortOrder sortOrder, Integer maxLength, Integer scale) {
+return equalsAny(targetType, PDate.INSTANCE, PTime.INSTANCE, 
PTimestamp.INSTANCE, PVarbinary.INSTANCE, PBinary.INSTANCE);
+return super.isBytesComparableWith(otherType) || otherType == 
PTime.INSTANCE || otherType == PTimestamp.INSTANCE || otherType == 
PLong.INSTANCE;
+Integer maxLength, Integer scale, SortOrder actualModifier, 
Integer desiredMaxLength, Integer desiredScale,

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/559//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/559//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/559//console

This message is automatically generated.

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch, PHOENIX-2946_v6.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474639#comment-15474639
 ] 

Kevin Liew commented on PHOENIX-2946:
-

Thanks [~giacomotaylor], I'm glad to see this fixed.

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch, PHOENIX-2946_v6.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-08 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471945#comment-15471945
 ] 

Kevin Liew edited comment on PHOENIX-476 at 9/8/16 6:24 PM:


I attached a patch implementing support for DEFAULT in the CREATE statement. Is 
this the right approach? I will work on ALTER, DROP support, site 
documentation, and support for expressions in the DEFAULT definition.

Should we save the DEFAULT value in the SYSTEM.CATALOG table?

edit: I hadn't read the discussion above. I will take a look at that.


was (Author: kliew):
I attached a patch implementing support for DEFAULT in the CREATE statement. Is 
this the right approach? I will work on ALTER, DROP support, site 
documentation, and support for expressions in the DEFAULT definition.

Should we save the default value in the SYSTEM.CATALOG table?

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method 
> (see the sketch after this list).
> 5.  for a key value column with a default value, we can get away without 
> incurring any storage cost. Although a little bit of extra effort than if we 
> persisted the default value on an UPSERT for key value columns, this approach 
> has the benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it is NOT doing partial evaluation, 
> while true means it is.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.
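> 
> A sketch of how steps 3-4 could combine (getDefaultValue() is the accessor 
> proposed in step 3; the surrounding variable names are illustrative):
> {code}
> // Sketch only: in PTableImpl.newKey, substitute the declared default
> // when no value was bound for a PK column.
> PColumn column = pkColumns.get(i);
> byte[] value = values[i];
> if (value == null && column.getDefaultValue() != null) {
>     value = column.getDataType().toBytes(column.getDefaultValue());
> }
> {code}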



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474602#comment-15474602
 ] 

Josh Elser commented on PHOENIX-3193:
-

This cannot be merged presently. There are numerous files missing license 
headers which need to be fixed first.

The Maven structure should also be fixed up (unnecessary intermediate pom), and 
there are a couple of other random questions I left on the PR.

> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for the X axis on the timeline sometimes starts again from 0; if the 
> X axis is in seconds then it should not roll over after 60 seconds unless 
> minutes are also shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip the zipkin chart on the vertical axis with arrows going the other way, 
> so it starts from the top-level root on the leftmost side and works toward 
> children on the right.
> * Ask the zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the zipkin 
> work you've done to the OS project. Ideally, include the zipkin work in the 
> phoenix-tracing module and call it phoenix-tracing. Only if there is some 
> major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that the UI and Zipkin output show the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474599#comment-15474599
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058576
  
--- Diff: bin/traceserver.py ---
@@ -116,7 +116,7 @@
 
 #" -Xdebug 
-Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n " + \
 #" -XX:+UnlockCommercialFeatures -XX:+FlightRecorder 
-XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true" + \
-java_cmd = '%(java)s $PHOENIX_OPTS ' + \
+java_cmd = '%(java)s ' + \
--- End diff --

Why was `$PHOENIX_OPTS` dropped here?


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing imporvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058576
  
--- Diff: bin/traceserver.py ---
@@ -116,7 +116,7 @@
 
 #" -Xdebug 
-Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n " + \
 #" -XX:+UnlockCommercialFeatures -XX:+FlightRecorder 
-XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true" + \
-java_cmd = '%(java)s $PHOENIX_OPTS ' + \
+java_cmd = '%(java)s ' + \
--- End diff --

Why was `$PHOENIX_OPTS` dropped here?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474595#comment-15474595
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058337
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-dependancytree-service.js
 ---
@@ -0,0 +1,51 @@
+'use strict';
--- End diff --

Requires license header.


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058366
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/controllers/trace-search-controllers.js
 ---
@@ -0,0 +1,261 @@
+'use strict';
--- End diff --

Requires license header.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474597#comment-15474597
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058366
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/controllers/trace-search-controllers.js
 ---
@@ -0,0 +1,261 @@
+'use strict';
--- End diff --

Requires license header.


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058337
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-dependancytree-service.js
 ---
@@ -0,0 +1,51 @@
+'use strict';
--- End diff --

Requires license header.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474588#comment-15474588
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058135
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/partials/list.html ---
@@ -29,7 +29,7 @@
 
   
  
- 
+ 
--- End diff --

Remove it instead of commenting?


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474589#comment-15474589
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058156
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/partials/search-trace.html
 ---
@@ -0,0 +1,100 @@
+
--- End diff --

No license header?


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474594#comment-15474594
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058264
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-timeline-service.js
 ---
@@ -0,0 +1,121 @@
+'use strict';
--- End diff --

Requires license header.


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058264
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-timeline-service.js
 ---
@@ -0,0 +1,121 @@
+'use strict';
--- End diff --

Requires license header.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474593#comment-15474593
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058196
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-distribution-service.js
 ---
@@ -0,0 +1,147 @@
+'use strict';
--- End diff --

Requires license header.


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058196
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/js/services/generate-distribution-service.js
 ---
@@ -0,0 +1,147 @@
+'use strict';
--- End diff --

Requires license header.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058156
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/partials/search-trace.html
 ---
@@ -0,0 +1,100 @@
+
--- End diff --

No license header?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78058135
  
--- Diff: 
phoenix-tracing/phoenix-tracing-webapp/src/main/webapp/partials/list.html ---
@@ -29,7 +29,7 @@
 
   
  
- 
+ 
--- End diff --

Remove it instead of commenting?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474584#comment-15474584
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78057994
  
--- Diff: pom.xml ---
@@ -102,6 +102,8 @@
 1.6.1
 2.10.4
 2.10
+1.3.6.RELEASE
+1.5.1
--- End diff --

Similarly, Zipkin has a 1.9.0 release they just put out. Is there a reason 
for using 1.5.1?


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78057994
  
--- Diff: pom.xml ---
@@ -102,6 +102,8 @@
 1.6.1
 2.10.4
 2.10
+1.3.6.RELEASE
+1.5.1
--- End diff --

Similarly, Zipkin has a 1.9.0 release they just put out. Is there a reason 
for using 1.5.1?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474582#comment-15474582
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78057750
  
--- Diff: pom.xml ---
@@ -102,6 +102,8 @@
 1.6.1
 2.10.4
 2.10
+1.3.6.RELEASE
--- End diff --

Why 1.3.6? It looks like there is a 1.3.7 and the "current" is tagged as 
"1.4.0". Is there a reason that the older 1.3.6 is being used?


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78057750
  
--- Diff: pom.xml ---
@@ -102,6 +102,8 @@
 1.6.1
 2.10.4
 2.10
+1.3.6.RELEASE
--- End diff --

Why 1.3.6? It looks like there is a 1.3.7 and the "current" is tagged as 
"1.4.0". Is there a reason that the older 1.3.6 is being used?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474579#comment-15474579
 ] 

Josh Elser commented on PHOENIX-3193:
-

bq. The patch introduces two new dependencies: springboot and zipkin. Both look 
to be ASF 2.0 licensed which is good, but it'd probably be worth the keen eye 
of Josh Elser if we want to move forward with it.

Yep, they both look fine to me WRT license compatibility. However..

{code}
+<plugin>
+  <groupId>org.springframework.boot</groupId>
+  <artifactId>spring-boot-maven-plugin</artifactId>
+  <version>1.3.6.RELEASE</version>
+  <executions>
+    <execution>
+      <goals>
+        <goal>repackage</goal>
+      </goals>
+    </execution>
+  </executions>
+</plugin>
{code}

This appears to be making some kind of uber/shaded jar. We need to investigate 
every artifact that gets bundled into it and ensure that the appropriate 
entries are added to {{dev/release_files/LICENSE}} and 
{{dev/release_files/NOTICE}}.
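
One way to enumerate what the repackaged jar actually carries is a throwaway 
scan of its entries. This is only a sketch (the class name is made up, and it 
assumes the path to the built jar is passed as the first argument):

{code}
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ShadedJarAudit {
    public static void main(String[] args) throws Exception {
        // args[0]: path to the repackaged jar produced by the build
        try (JarFile jar = new JarFile(args[0])) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                // Boot-style repackaging carries dependencies as nested jars;
                // shade-style repackaging explodes them but keeps each
                // artifact's pom.properties under META-INF/maven. Print both
                // so either layout shows what LICENSE/NOTICE must cover.
                if (name.endsWith(".jar") || name.endsWith("/pom.properties")) {
                    System.out.println(name);
                }
            }
        }
    }
}
{code}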

> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474554#comment-15474554
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78055951
  
--- Diff: phoenix-tracing/pom.xml ---
@@ -0,0 +1,47 @@
+
+  
+
+  <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
--- End diff --

I would remove this intermediate pom. It serves no purpose and will just 
make the build more brittle. Just make phoenix-tracing-webapp and 
phoenix-zipkin refer to the parent two levels up (`../../`).


> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/202#discussion_r78055951
  
--- Diff: phoenix-tracing/pom.xml ---
@@ -0,0 +1,47 @@
+
+  
+
+  <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
--- End diff --

I would remove this intermediate pom. It serves no purpose and will just 
make the build more brittle. Just make phoenix-tracing-webapp and 
phoenix-zipkin refer to the parent two levels up (`../../`).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2946:
--
Attachment: PHOENIX-2946_v6.patch

Final patch

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch, PHOENIX-2946_v6.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3255:
--
Fix Version/s: 4.8.1
   4.9.0

> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3255.patch
>
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474347#comment-15474347
 ] 

James Taylor commented on PHOENIX-2946:
---

Improvements to test coverage for TIMESTAMP have been moved to PHOENIX-3255.

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-2946:
-

Assignee: James Taylor  (was: Kevin Liew)

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: James Taylor
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch, PHOENIX-2946_v5.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--------------------------+
> |  DATECOL = TIMESTAMPCOL  |
> +--------------------------+
> | true                     |
> +--------------------------+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3255:
--
Attachment: PHOENIX-3255.patch

Attaching patch on behalf of [~kliew].

> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
> Attachments: PHOENIX-3255.patch
>
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3255:
-

 Summary: Increase test coverage for TIMESTAMP
 Key: PHOENIX-3255
 URL: https://issues.apache.org/jira/browse/PHOENIX-3255
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor


We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3255) Increase test coverage for TIMESTAMP

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3255:
--
Assignee: Kevin Liew

> Increase test coverage for TIMESTAMP
> 
>
> Key: PHOENIX-3255
> URL: https://issues.apache.org/jira/browse/PHOENIX-3255
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Kevin Liew
>
> We need to improve test coverage for TIMESTAMP. See PHOENIX-2946 for issues 
> found when comparing TIMESTAMP with DATE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474257#comment-15474257
 ] 

Hadoop QA commented on PHOENIX-3193:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12827607/PHOENIX-3193.patch
  against master branch at commit 3a8724eee05aaf477bf6085415e781856990e1c0.
  ATTACHMENT ID: 12827607

{color:red}-1 @author{color}.  The patch appears to contain 6 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 40 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/558//console

This message is automatically generated.

> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15474210#comment-15474210
 ] 

James Taylor edited comment on PHOENIX-3193 at 9/8/16 3:57 PM:
---

This patch applies on master and builds. We asked Nishani to combine both the 
phoenix-tracing-webapp and the phoenix-zipkin into a single phoenix-tracing 
module. This was done in a somewhat strange manner, as the phoenix-tracing 
module has two submodules. Maybe this is ok and necessary, but it's not clear 
to me why. The patch introduces two new dependencies: springboot and zipkin. 
Both look to be ASF 2.0 licensed which is good, but it'd probably be worth the 
keen eye of [~elserj] if we want to move forward with it.

The main thing the tracing UI work needs is an owner, [~rajeshbabu]. We don't have 
any reasonable amount of testing on it, the docs are out of date, and no one in 
the OS community supports it. Without an owner, I think we should remove it from 
the source and binary distro and host it on github somewhere as an experimental 
add-on.

Thoughts?




was (Author: jamestaylor):
This patch applies on master and builds. We asked Nishani to combine both the 
phoenix-tracing-webapp and the phoenix-zipkin into a single phoenix-tracing 
module. This was done in a somewhat strange manner, as the phoenix-tracing 
module has two submodules. Maybe this is ok and necessary, but it's not clear 
to me why. The patch introduces two new dependencies: springboot and zipkin. 
Both look to be ASF 2.0 licensed which is good, but it'd probably be worth the 
keen eye of [~elserj] if we want to move forward with it.

The main thing the tracing UI work needs is an owner, [~rajeshbabu]. We don't have 
any reasonable amount of testing on it and no one to support it in the OS 
community. Without an owner, I think we should remove it from the source and binary 
distro and perhaps host it on github somewhere as an experimental add-on.

Thoughts?



> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3193:
--
Attachment: PHOENIX-3193.patch

This patch applies on master and builds. We asked Nishani to combine both the 
phoenix-tracing-webapp and the phoenix-zipkin into a single phoenix-tracing 
module. This was done in a somewhat strange manner, as the phoenix-tracing 
module has two submodules. Maybe this is ok and necessary, but it's not clear 
to me why. The patch introduces two new dependencies: springboot and zipkin. 
Both look to be ASF 2.0 licensed which is good, but it'd probably be worth the 
keen eye of [~elserj] if we want to move forward with it.

The main thing the tracing UI work needs is an owner, [~rajeshbabu]. We don't have 
any reasonable amount of testing on it and no one to support it in the OS 
community. Without an owner, I think we should remove it from the source and binary 
distro and perhaps host it on github somewhere as an experimental add-on.

Thoughts?



> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
> Attachments: PHOENIX-3193.patch
>
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3193:
--
Assignee: Nishani   (was: James Taylor)

> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3193) Tracing UI improvements

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3193:
--
Summary: Tracing UI improvements  (was: Tracing UI cleanup - final tasks 
before GSoC pull request)

> Tracing UI improvements
> ---
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3193) Tracing UI cleanup - final tasks before GSoC pull request

2016-09-08 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3193:
-

Assignee: James Taylor  (was: Nishani )

> Tracing UI cleanup - final tasks before GSoC pull request
> -
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>
> Points from GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In list page, run query with description, start_time, (end_time-start_time) 
> duration from T where trace_id = ?
> * More space for descriptions on bar chart. Wrap if necessary
> * Label for X axis on timeline sometimes starts again from 0; if X axis is in 
> seconds then it should not roll over after 60 seconds unless minutes are also 
> shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the phoenix project to add the 
> zipkin work you've done to the OS project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that UI and Zipkin output shows the correct 
> corresponding timeline
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


PhoenixIOException timeout issue

2016-09-08 Thread Srikanth K
Hi,

I am running a Phoenix MapReduce job where I am getting a PhoenixIOException.
Below is the exception.

16/09/08 03:32:28 INFO mapreduce.Job: Task Id : attempt_1471862728027_0351_m_07_0, Status : FAILED
Error: java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 60143ms passed since the last invocation, timeout is currently set to 6
        at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:147)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)


Please let me know how I can increase the timeout when initiating the job.

Thanks,
Srikanth
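
A minimal sketch of one way to raise the relevant client timeouts before the 
job is created, since the job configuration is what the Phoenix record reader 
sees at runtime. The 180000 ms values and the class/job names here are 
placeholders, not recommendations:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class TimeoutTunedJob {
    public static Job create() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Set the timeouts before Job.getInstance copies the conf into the job.
        conf.set("phoenix.query.timeoutMs", "180000");             // Phoenix query timeout
        conf.set("hbase.client.scanner.timeout.period", "180000"); // HBase scanner timeout
        conf.set("hbase.rpc.timeout", "180000");                   // HBase RPC timeout
        return Job.getInstance(conf, "phoenix-mr-job");
    }
}
{code}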

