[jira] [Commented] (PHOENIX-3235) Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-09-07 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472802#comment-15472802
 ] 

Francis Chuang commented on PHOENIX-3235:
-

Thanks, [~jamestaylor], that would be really awesome! Fingers crossed!

> Tephra errors when trying to create a transactional table in Phoenix 4.8.0
> --
>
> Key: PHOENIX-3235
> URL: https://issues.apache.org/jira/browse/PHOENIX-3235
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: Alpine Linux 3.4, OpenJDK 8 (JDK) in Docker 1.12.1 
> containers
>Reporter: Francis Chuang
>
> I've built a Docker image to run HBase 1.2 with Phoenix 4.8.0 in fully 
> distributed mode. There are 2 images: 1 containing HBase and Phoenix, and 1 
> with just the Phoenix Query Server. Java is OpenJDK 8 (JDK flavour).
> The Docker images can be found in my branch here: 
> https://github.com/F21/hbase-phoenix/tree/consolidate-images/
> To build each image, simply go into the appropriate folder and run `docker 
> build . -t myimage`.
> To run the master, these environment variables are required:
> {code}
> CLUSTER_NAME: hbase
> HBASE_ROLE: master
> HBASE_ZOOKEEPER_QUORUM: myzk
> HDFS_CLUSTER_NAME: mycluster
> DFS_NAMENODE_RPC_ADDRESS_NN1: nn1:8020
> DFS_NAMENODE_RPC_ADDRESS_NN2: nn2:8020
> DFS_NAMENODE_HTTP_ADDRESS_NN1: nn1:50070
> DFS_NAMENODE_HTTP_ADDRESS_NN2: nn2:50070
> {code}
> To run the region server, these environment variables are required:
> {code}
> CLUSTER_NAME: hbase
> HBASE_ROLE: regionserver
> HBASE_ZOOKEEPER_QUORUM: myzk
> HDFS_CLUSTER_NAME: mycluster
> DFS_NAMENODE_RPC_ADDRESS_NN1: nn1:8020
> DFS_NAMENODE_RPC_ADDRESS_NN2: nn2:8020
> DFS_NAMENODE_HTTP_ADDRESS_NN1: nn1:50070
> DFS_NAMENODE_HTTP_ADDRESS_NN2: nn2:50070
> {code}
> HBase and the transaction server start up correctly along with the query 
> server. I can connect to the query server using SQuirreL SQL, and the Phoenix 
> tables were created correctly. I am also able to create non-transactional 
> tables and run queries against them.
> However, if I try to create a transactional table, I get an error message 
> saying:
> {code}
> Error: Error -1 (0) : Error while executing SQL "CREATE TABLE my_table (k 
> BIGINT PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true": Remote driver error: 
> RuntimeException: java.lang.Exception: Thrift error for 
> org.apache.tephra.distributed.TransactionServiceClient$2@660ae15c: Internal 
> error processing startShort -> Exception: Thrift error for 
> org.apache.tephra.distributed.TransactionServiceClient$2@660ae15c: Internal 
> error processing startShort -> TApplicationException: Internal error 
> processing startShort
> SQLState:  0
> ErrorCode: -1
> {code}
> JPS confirms that the transaction manager is still running:
> {code}
> bash-4.3# jps
> 771 Jps
> 138 HMaster
> 190 TransactionServiceMain
> {code}
> In ` /tmp/tephra-/tephra-service--m9edd51-hmaster1.m9edd51.log`, it logs the 
> following:
> {code}
> Wed Aug 31 23:12:33 UTC 2016 Starting tephra service on 
> m9edd51-hmaster1.m9edd51
> -f: file size (blocks) unlimited
> -t: cpu time (seconds) unlimited
> -d: data seg size (kb) unlimited
> -s: stack size (kb)8192
> -c: core file size (blocks)unlimited
> -m: resident set size (kb) unlimited
> -l: locked memory (kb) 64
> -p: processes  unlimited
> -n: file descriptors   65536
> -v: address space (kb) unlimited
> -w: locks  unlimited
> -e: scheduling priority0
> -r: real-time priority 0
> Command:  /usr/lib/jvm/java-1.8-openjdk/bin/java -XX:+UseConcMarkSweepGC -cp 
> 
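For illustration, the master environment variables listed in the description above might be wired into a docker-compose service definition like the sketch below. The service name, image tag, and hostnames are hypothetical, not taken from the linked repository:

```yaml
# Hypothetical docker-compose sketch for the HBase master container.
# The image name "myimage" and hosts (myzk, nn1, nn2) are illustrative only.
services:
  hmaster:
    image: myimage
    environment:
      CLUSTER_NAME: hbase
      HBASE_ROLE: master
      HBASE_ZOOKEEPER_QUORUM: myzk
      HDFS_CLUSTER_NAME: mycluster
      DFS_NAMENODE_RPC_ADDRESS_NN1: nn1:8020
      DFS_NAMENODE_RPC_ADDRESS_NN2: nn2:8020
      DFS_NAMENODE_HTTP_ADDRESS_NN1: nn1:50070
      DFS_NAMENODE_HTTP_ADDRESS_NN2: nn2:50070
```

The region server container would take the same variables with HBASE_ROLE set to regionserver.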

[jira] [Commented] (PHOENIX-3235) Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472795#comment-15472795
 ] 

James Taylor commented on PHOENIX-3235:
---

Yes, [~francischuang] - that's the plan, assuming we get a new release of 
Tephra in time.


[jira] [Commented] (PHOENIX-3235) Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-09-07 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472726#comment-15472726
 ] 

Francis Chuang commented on PHOENIX-3235:
-

[~mynameisalian...@gmail.com] just fixed/closed TEPHRA-179. Will it be possible 
to get the fix into Phoenix 4.8.1?


[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472628#comment-15472628
 ] 

Hadoop QA commented on PHOENIX-2946:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12827490/PHOENIX-2946_v4.patch
  against master branch at commit 3a8724eee05aaf477bf6085415e781856990e1c0.
  ATTACHMENT ID: 12827490

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"CREATE TABLE IF NOT EXISTS T1 (k1 INTEGER NOT NULL, dates 
DATE, timestamps TIMESTAMP, times TIME CONSTRAINT pk PRIMARY KEY (k1))";
+String dml = "UPSERT INTO T1 VALUES (1, TO_DATE('Sat, 3 Feb 2008 
03:05:06 GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
15:10:20'), " +
+dml = "UPSERT INTO T1 VALUES (2, TO_DATE('Sat, 3 Feb 2008 03:05:06 
GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
10:10:20'), " +
+dml = "UPSERT INTO T1 VALUES (3, TO_DATE('Sat, 3 Feb 2008 03:05:06 
GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
08:10:20'), " +
+new DateCodec(), 11); // After TIMESTAMP and DATE to ensure 
toLiteral finds those first
+public Date toObject(byte[] b, int o, int l, PDataType actualType, 
SortOrder sortOrder, Integer maxLength, Integer scale) {
+return equalsAny(targetType, PDate.INSTANCE, PTime.INSTANCE, 
PTimestamp.INSTANCE, PVarbinary.INSTANCE, PBinary.INSTANCE);
+return super.isBytesComparableWith(otherType) || otherType == 
PTime.INSTANCE || otherType == PTimestamp.INSTANCE;
+Integer maxLength, Integer scale, SortOrder actualModifier, 
Integer desiredMaxLength, Integer desiredScale,
+if (ptr.getLength() > 0 && (actualType == PTimestamp.INSTANCE || 
actualType == PUnsignedTimestamp.INSTANCE)) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.phoenix.util.csv.StringToArrayConverterTest
  org.apache.phoenix.util.csv.CsvUpsertExecutorTest

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/556//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/556//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/556//console

This message is automatically generated.

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch, 
> PHOENIX-2946_v4.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--+
> |  DATECOL = TIMESTAMPCOL  |
> +--+
> | true |
> +--+
> 1 row selected (0.019 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2946:
--
Attachment: PHOENIX-2946_v4.patch

Fix test failures






[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472481#comment-15472481
 ] 

ASF GitHub Bot commented on PHOENIX-2946:
-

Github user kliewkliew closed the pull request at:

https://github.com/apache/phoenix/pull/206







[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472300#comment-15472300
 ] 

James Taylor commented on PHOENIX-3230:
---

And maybe use the INDEX_STATUS column (ideally renamed to just STATUS) and set 
the STATUS to a new value of UPGRADING while upgrading and clear it afterwards. 
Or something along those lines.

> SYSTEM.CATALOG get restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even if 
> the second connection is a couple of seconds apart but within the upgrade 
> window. This is likely to happen in situations where a pool of client machines 
> all get upgraded to the latest Phoenix version. After this exception, all 
> clients will cease to work with an undefined column exception due to the 
> restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2486)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2282)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2282)
>   at 
> 

[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472293#comment-15472293
 ] 

James Taylor commented on PHOENIX-3230:
---

You could either have a polling loop, or you can fail the other connections 
with an "Upgrade in progress" exception. Probably the latter is fine.


[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472277#comment-15472277
 ] 

Samarth Jain commented on PHOENIX-3230:
---

Actually, I am not sure if the checkAndPut approach will work. We want only one 
client to run the upgrade and we want the other clients to wait for the upgrade 
to complete. Simple checkAndPut won't accomplish that. 


[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472181#comment-15472181
 ] 

Samarth Jain commented on PHOENIX-3230:
---

bq. Another approach is to use an HBase checkAndPut on a new VERSION cell on the 
row representing the SYSTEM.CATALOG table header. If a client does this and the 
VERSION is already equal to the current version, then you don't do the upgrade 
(since another client beat you to it).

I like this approach better. Relying on the ColumnAlreadyExistsException and not 
running the rest of the upgrade when it happens would make the code a bit 
brittle, IMHO.

This would also be just a client-side change, so it should be OK 
backward-compatibility-wise too. We would have to be a little careful with how 
we use the checkAndPut to ensure mutex behavior. I will post a patch, since 
explaining it in plain English would be tricky.
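A minimal sketch of the mutex semantics being discussed may help. Everything here is hypothetical (class and method names, version strings); a ConcurrentHashMap stands in for the VERSION cell on the SYSTEM.CATALOG header row, since its atomic replace() has the same compare-and-set behavior as an HBase Table.checkAndPut against a live cluster:

```java
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the SYSTEM.CATALOG header row: one VERSION cell per table.
// In real code this would be a Table.checkAndPut() against HBase; the map's
// atomic replace() mimics its compare-and-set semantics.
public class UpgradeMutexSketch {
    private final ConcurrentHashMap<String, String> versionCell = new ConcurrentHashMap<>();

    UpgradeMutexSketch(String oldVersion) {
        versionCell.put("SYSTEM.CATALOG", oldVersion);
    }

    // Returns true only for the single client whose compare-and-set wins;
    // every other client sees the VERSION already bumped and skips the upgrade.
    boolean tryAcquireUpgrade(String expectedOld, String newVersion) {
        return versionCell.replace("SYSTEM.CATALOG", expectedOld, newVersion);
    }

    public static void main(String[] args) {
        UpgradeMutexSketch mutex = new UpgradeMutexSketch("4.7.0");
        System.out.println(mutex.tryAcquireUpgrade("4.7.0", "4.8.0")); // first client wins
        System.out.println(mutex.tryAcquireUpgrade("4.7.0", "4.8.0")); // second client loses the race
    }
}
```

Only the winner of the compare-and-set runs the upgrade; losers see a false return and proceed without snapshotting.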





> SYSTEM.CATALOG get restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> if the second connection is a couple of seconds apart but within the upgrade 
> window. This is likely to happen in a situation where a pool of client 
> machines all get upgraded to the latest Phoenix version. After this exception, 
> all clients cease to work with an undefined column exception due to the 
> restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> 

[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472133#comment-15472133
 ] 

James Taylor commented on PHOENIX-3230:
---

I think a simpler solution would be to use 
ConnectionQueryServicesImpl.addColumn() with addIfNotExists set to false. Only 
one client (the first) will succeed. The others will get a 
ColumnAlreadyExistsException, which can be ignored, but we should not do the 
snapshot or the rest of the upgrade in this case.

Another approach is to use an HBase checkAndPut on a new VERSION cell on the 
row representing the SYSTEM.CATALOG table header. If a client does this and the 
VERSION is already equal to the current version, then you don't do the upgrade 
(since another client beat you to it).

I don't think we need to manage ZK distributed locks.
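A rough illustration of the first approach (addColumn with addIfNotExists=false). The names below are illustrative, not Phoenix API: the set plays the role of SYSTEM.CATALOG's column list, and the thrown IllegalStateException plays the role of Phoenix's ColumnAlreadyExistsException.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for the addColumn-with-addIfNotExists=false idea.
public class AddColumnOnceSketch {
    private final Set<String> catalogColumns =
        Collections.synchronizedSet(new HashSet<>());

    // Throws if another client already added the column.
    void addColumn(String column) {
        if (!catalogColumns.add(column)) {
            throw new IllegalStateException("column already exists: " + column);
        }
    }

    // Only the first client to add the column runs the snapshot + upgrade.
    boolean upgradeIfFirst(String column) {
        try {
            addColumn(column);
            // snapshot SYSTEM.CATALOG and run the rest of the upgrade here
            return true;
        } catch (IllegalStateException alreadyExists) {
            // another client beat us to it: skip the snapshot and upgrade
            return false;
        }
    }

    public static void main(String[] args) {
        AddColumnOnceSketch sketch = new AddColumnOnceSketch();
        System.out.println(sketch.upgradeIfFirst("IS_NAMESPACE_MAPPED")); // true
        System.out.println(sketch.upgradeIfFirst("IS_NAMESPACE_MAPPED")); // false
    }
}
```

The key point is that the "already exists" failure is treated as "someone else owns the upgrade", so losing clients never take the snapshot that clobbers the winner's work.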

> SYSTEM.CATALOG get restored from snapshot with multi-client connection
> --
>
> Key: PHOENIX-3230
> URL: https://issues.apache.org/jira/browse/PHOENIX-3230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.8.1
>
>
> If two separate Phoenix connections try to upgrade Phoenix from v4.7 to 4.8.1, 
> the second connection fails with the following exception. This happens even 
> if the second connection is a couple of seconds apart but within the upgrade 
> window. This is likely to happen in a situation where a pool of client 
> machines all get upgraded to the latest Phoenix version. After this exception, 
> all clients cease to work with an undefined column exception due to the 
> restored/aborted upgrade.
> {noformat}
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: IS_NAMESPACE_MAPPED 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: AUTO_PARTITION_SEQ 
> VARCHAR
> WARN query.ConnectionQueryServicesImpl: Table already modified at this 
> timestamp, so assuming add of these columns already done: APPEND_ONLY_SCHEMA 
> BOOLEAN
> WARN query.ConnectionQueryServicesImpl: Starting restore of SYSTEM.CATALOG 
> using snapshot SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700 
> because upgrade failed
> 16/08/31 11:41:05 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> 16/08/31 11:41:09 WARN query.ConnectionQueryServicesImpl: Successfully 
> restored and enabled SYSTEM.CATALOG using snapshot 
> SNAPSHOT_SYSTEM.CATALOG_4.7.x_TO_4.8.0_20160831114048-0700
> Error: ERROR 504 (42703): Undefined column. columnName=IS_NAMESPACE_MAPPED 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=IS_NAMESPACE_MAPPED
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:693)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:590)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:333)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:247)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2275)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:920)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:193)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:340)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:328)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:326)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1369)
>   at 
> 

[jira] [Commented] (PHOENIX-3230) SYSTEM.CATALOG get restored from snapshot with multi-client connection

2016-09-07 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472097#comment-15472097
 ] 

Samarth Jain commented on PHOENIX-3230:
---

When upgrading with multiple clients (different JVMs), we are running into race 
conditions.

Client 1 trying to execute:
{code}
metaConnection = addColumnsIfNotExists(metaConnection,
        PhoenixDatabaseMetaData.SYSTEM_CATALOG,
        MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0 - 1,
        PhoenixDatabaseMetaData.AUTO_PARTITION_SEQ + " "
                + PVarchar.INSTANCE.getSqlTypeName());
{code}

Client 2 trying to execute:
{code}
createSnapshot(snapshotName, sysCatalogTableName);
{code}

Client 2 then fails with:

{code}
java.sql.SQLException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
ss=SNAPSHOT_SYSTEM.CATALOG_null_TO_4.8.0_20160907155025-0700 
table=SYSTEM.CATALOG type=FLUSH } had an error.  Procedure 
SNAPSHOT_SYSTEM.CATALOG_null_TO_4.8.0_20160907155025-0700 { waiting=[] 
done=[localhost,58539,1473287287332] }
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
at 
org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:3237)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:43294)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via 
localhost,58539,1473287287332:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable:
 org.apache.hadoop.hbase.NotServingRegionException: 
SYSTEM.CATALOG,,1473283460590.fe40df52aa069a8d4a3ee52e4b282e5c. is closing
at 
org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:307)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:332)
... 10 more
Caused by: 
org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: 
org.apache.hadoop.hbase.NotServingRegionException: 
SYSTEM.CATALOG,,1473283460590.fe40df52aa069a8d4a3ee52e4b282e5c. is closing
at 
org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:338)
at 
org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:138)
at 
org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:157)
at 
org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:186)
at 
org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:53)
... 4 more

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.createSnapshot(ConnectionQueryServicesImpl.java:2597)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2337)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:1)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2272)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at 
org.apache.phoenix.end2end.PhoenixRuntimeIT.testConnection(PhoenixRuntimeIT.java:150)
at 

[jira] [Commented] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472078#comment-15472078
 ] 

Hudson commented on PHOENIX-2785:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #10 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/10/])
PHOENIX-2785 Do not store NULLs for immutable tables. (larsh: rev 
5ffe30f6762069253a2bd6b0deaf3712da7a1e4e)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/StoreNullsIT.java


> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we always store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472074#comment-15472074
 ] 

Thomas D'Silva commented on PHOENIX-2946:
-

+1

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--+
> |  DATECOL = TIMESTAMPCOL  |
> +--+
> | true |
> +--+
> 1 row selected (0.019 seconds)
> {code}





[jira] [Commented] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472069#comment-15472069
 ] 

Hudson commented on PHOENIX-2785:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1384 (See 
[https://builds.apache.org/job/Phoenix-master/1384/])
PHOENIX-2785 Do not store NULLs for immutable tables. (larsh: rev 
3a8724eee05aaf477bf6085415e781856990e1c0)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/StoreNullsIT.java


> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we always store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.





[jira] [Commented] (PHOENIX-3116) Support incompatible HBase 1.1.5 and HBase 1.2.2

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472025#comment-15472025
 ] 

James Taylor commented on PHOENIX-3116:
---

Agreed. The unsupported versions can be added to this[1] page, which lives in 
the site/markdown/download.md file and can be updated as described here[2].


[1] https://phoenix.apache.org/download.html
[2] https://phoenix.apache.org/building_website.html

> Support incompatible HBase 1.1.5 and HBase 1.2.2
> 
>
> Key: PHOENIX-3116
> URL: https://issues.apache.org/jira/browse/PHOENIX-3116
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: Rob Leidle
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3116_v2.patch, upgrade-hbase-to-1.2.2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> HBase 1.2.2 made a backwards incompatible change in HTableInterface that 
> requires new overrides.





[jira] [Commented] (PHOENIX-3116) Support incompatible HBase 1.1.5 and HBase 1.2.2

2016-09-07 Thread Poorna Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471989#comment-15471989
 ] 

Poorna Chandra commented on PHOENIX-3116:
-

[~apurtell] Yes, having an errata will help a lot. Thanks for the offer.

What do you think [~jamestaylor]?

> Support incompatible HBase 1.1.5 and HBase 1.2.2
> 
>
> Key: PHOENIX-3116
> URL: https://issues.apache.org/jira/browse/PHOENIX-3116
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: Rob Leidle
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3116_v2.patch, upgrade-hbase-to-1.2.2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> HBase 1.2.2 made a backwards incompatible change in HTableInterface that 
> requires new overrides.





[GitHub] phoenix issue #207: Phoenix-476 Support declaration of DEFAULT in CREATE sta...

2016-09-07 Thread kliewkliew
Github user kliewkliew commented on the issue:

https://github.com/apache/phoenix/pull/207
  
Based on the wrong branch


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #207: Phoenix-476 Support declaration of DEFAULT in CRE...

2016-09-07 Thread kliewkliew
Github user kliewkliew closed the pull request at:

https://github.com/apache/phoenix/pull/207




[GitHub] phoenix pull request #207: Phoenix-476 Support declaration of DEFAULT in CRE...

2016-09-07 Thread kliewkliew
GitHub user kliewkliew opened a pull request:

https://github.com/apache/phoenix/pull/207

Phoenix-476 Support declaration of DEFAULT in CREATE statement



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kliewkliew/phoenix PHOENIX-476

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #207


commit e34704eeb97cedc16a088dd9faa67736e88a8bf4
Author: kliewkliew 
Date:   2016-08-30T15:16:50Z

Merge remote-tracking branch 'apache/master'

commit 4a1e97059d8aeed20d1c6a8512e71af50318c88d
Author: kliewkliew 
Date:   2016-08-30T15:23:53Z

PHOENIX-2946 Projected comparison between date and timestamp columns always 
returns true

commit b10e820ec38387bc70e5aa12abcc33a25937158a
Author: kliewkliew 
Date:   2016-08-31T14:53:13Z

Increase test coverage.

commit 8e0d6f5ecd5ac5eba49a47d1adf25f318f17c090
Author: kliewkliew 
Date:   2016-09-05T15:22:13Z

PHOENIX-476 Support declaration of DEFAULT in CREATE statement

commit f363feefde6cf9a24b209da0178f74880ec23eed
Author: kliewkliew 
Date:   2016-09-05T16:35:05Z

Fix unit test.






[jira] [Comment Edited] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-07 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471945#comment-15471945
 ] 

Kevin Liew edited comment on PHOENIX-476 at 9/7/16 10:21 PM:
-

I attached a patch implementing support for DEFAULT in the CREATE statement. Is 
this the right approach? I will work on ALTER and DROP support, site 
documentation, and support for expressions in the DEFAULT definition.

Should we save the default value in the SYSTEM.CATALOG table?


was (Author: kliew):
I attached a patch implementing support for DEFAULT in the CREATE statement. Is 
this the right approach? I will work on ALTER and DROP support and support for 
expressions in the DEFAULT definition.

Should we store the default value in the SYSTEM.CATALOG table?

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can get away without 
> incurring any storage cost. Although this is a little more effort than 
> persisting the default value on an UPSERT for key value columns, this approach 
> has the benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it is NOT doing partial evaluation, 
> while true means it is.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.





[jira] [Comment Edited] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-07 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471945#comment-15471945
 ] 

Kevin Liew edited comment on PHOENIX-476 at 9/7/16 10:11 PM:
-

I attached a patch implementing support for DEFAULT in the CREATE statement. Is 
this the right approach? I will work on ALTER and DROP support and support for 
expressions in the DEFAULT definition.

Should we store the default value in the SYSTEM.CATALOG table?


was (Author: kliew):
I attached a patch implementing support for DEFAULT in the CREATE statement. I 
will work on ALTER, DROP support and support for expressions in the DEFAULT 
definition.

Should we store the default value in the SYSTEM.CATALOG table?

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can get away without 
> incurring any storage cost. Although this is a little more effort than 
> persisting the default value on an UPSERT for key value columns, this approach 
> has the benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it is NOT doing partial evaluation, 
> while true means it is.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.





[jira] [Updated] (PHOENIX-476) Support declaration of DEFAULT in CREATE statement

2016-09-07 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew updated PHOENIX-476:
---
Attachment: PHOENIX-476.patch

I attached a patch implementing support for DEFAULT in the CREATE statement. I 
will work on ALTER, DROP support and support for expressions in the DEFAULT 
definition.

Should we store the default value in the SYSTEM.CATALOG table?
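The "no storage cost" idea from the issue description (fall back to the declared default at read time instead of persisting it) can be sketched roughly as below. The class and method names are hypothetical, not Phoenix's KeyValueColumnExpression API; two maps stand in for stored cells and DDL-declared defaults.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: mimics "don't store the default, fall back to it on
// read". Names are hypothetical, not Phoenix internals.
public class DefaultValueSketch {
    private final Map<String, String> storedCells = new HashMap<>();
    private final Map<String, String> declaredDefaults = new HashMap<>();

    DefaultValueSketch() {
        declaredDefaults.put("CITY", "Sandnes"); // DEFAULT 'Sandnes' from the DDL
    }

    void upsert(String column, String value) {
        storedCells.put(column, value);
    }

    // An absent cell evaluates to the declared default, so the default value
    // never needs to be written to storage at all.
    String evaluate(String column) {
        String stored = storedCells.get(column);
        return stored != null ? stored : declaredDefaults.get(column);
    }

    public static void main(String[] args) {
        DefaultValueSketch row = new DefaultValueSketch();
        System.out.println(row.evaluate("CITY")); // Sandnes (nothing stored)
        row.upsert("CITY", "Oslo");
        System.out.println(row.evaluate("CITY")); // Oslo (explicit value wins)
    }
}
```

The real implementation would do this inside expression evaluation (and only after all KeyValues for the row have been seen, per the partial-evaluation caveat in the description), but the read-time fallback is the core of the storage saving.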

> Support declaration of DEFAULT in CREATE statement
> --
>
> Key: PHOENIX-476
> URL: https://issues.apache.org/jira/browse/PHOENIX-476
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: enhancement
> Attachments: PHOENIX-476.patch
>
>
> Support the declaration of a default value in the CREATE TABLE/VIEW statement 
> like this:
> CREATE TABLE Persons (
> Pid int NOT NULL PRIMARY KEY,
> LastName varchar(255) NOT NULL,
> FirstName varchar(255),
> Address varchar(255),
> City varchar(255) DEFAULT 'Sandnes'
> )
> To implement this, we'd need to:
> 1. add a new DEFAULT_VALUE key value column in SYSTEM.TABLE and pass through 
> the value when the table is created (in MetaDataClient).
> 2. always set NULLABLE to ResultSetMetaData.columnNoNulls if a default value 
> is present, since the column will never be null.
> 3. add a getDefaultValue() accessor in PColumn
> 4.  for a row key column, during UPSERT use the default value if no value was 
> specified for that column. This could be done in the PTableImpl.newKey method.
> 5.  for a key value column with a default value, we can avoid incurring any 
> storage cost. Although this takes a bit more effort than persisting the 
> default value on an UPSERT for key value columns, this approach has the 
> benefit of not incurring any storage cost for a default value.
> * serialize any default value into KeyValueColumnExpression
> * in the evaluate method of KeyValueColumnExpression, conditionally use 
> the default value if the column value is not present. If doing partial 
> evaluation, you should not yet return the default value, as we may not have 
> encountered the KeyValue for the column yet (since a filter evaluates 
> each time it sees each KeyValue, and there may be more than one KeyValue 
> referenced in the expression). Partial evaluation is determined by calling 
> Tuple.isImmutable(), where false means it IS still doing partial evaluation, 
> while true means all KeyValues for the row have been seen.
> * modify EvaluateOnCompletionVisitor by adding a visitor method for 
> RowKeyColumnExpression and KeyValueColumnExpression to set 
> evaluateOnCompletion to true if they have a default value specified. This 
> will cause filter evaluation to execute one final time after all KeyValues 
> for a row have been seen, since it's at this time we know we should use the 
> default value.
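The conditional-default evaluation described in steps 4 and 5 above can be sketched in plain Java. This is a minimal, hypothetical model: the Map stands in for the row's KeyValues, the boolean flag for Tuple.isImmutable(), and the class and method names are invented — it is not Phoenix's actual KeyValueColumnExpression code.

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultDemo {
    // row: the KeyValues seen so far; isImmutable: true once all KeyValues
    // for the row have been seen (mirrors Tuple.isImmutable()).
    static Object evaluate(Map<String, Object> row, boolean isImmutable,
                           String column, Object defaultValue) {
        Object stored = row.get(column);
        if (stored != null) {
            return stored;          // a stored value always wins over the default
        }
        if (!isImmutable) {
            return null;            // partial evaluation: the KeyValue may still arrive
        }
        return defaultValue;        // row complete: safe to apply the DEFAULT
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("LASTNAME", "Smith");
        // During partial evaluation the default must not be returned yet:
        System.out.println(evaluate(row, false, "CITY", "Sandnes")); // null
        // After the final pass (evaluateOnCompletion), the default applies:
        System.out.println(evaluate(row, true, "CITY", "Sandnes"));  // Sandnes
    }
}
```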





[jira] [Commented] (PHOENIX-3228) Index table are configured with a custom/smaller MAX_FILESIZE

2016-09-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471907#comment-15471907
 ] 

Lars Hofhansl commented on PHOENIX-3228:


Ran all the "ant package" tests, as well as SaltedIndexIT, 
TenantSpecificViewIndexIT, TenantSpecificViewIndexSaltedIT, MutableIndexIT, 
IndexIT, and ImmutableIndexIT. All pass.

Going to commit - unless there's more I should do for tests.

> Index table are configured with a custom/smaller MAX_FILESIZE
> -
>
> Key: PHOENIX-3228
> URL: https://issues.apache.org/jira/browse/PHOENIX-3228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 3228-remove.txt, 3228.txt
>
>
> I do not think we should do this. The default of 10G is chosen to keep HBase 
> happy. For smaller tables, or initially until the index gets large, it might 
> lead to more index regions and hence utilize more region servers, but 
> generally this is not the right thing to do.





Re: Starting to think about 4.8.1

2016-09-07 Thread larsh
I meant: "It would be great if everybody could go through their jira's and push 
issues they won't get to within 10 days to 4.8.2 OR unschedule those from 4.8, 
to mark them as 4.9 only".

Also, please make sure that jira is up to date. If a fix was committed to the 
4.x/master and the 4.8 branches the jira should be marked with 4.8.1 and 4.9.0. 
I'll do a pass through git to make sure it matches up with jira.
-- Lars

  From: "la...@apache.org" 
 To: Dev  
 Sent: Wednesday, September 7, 2016 1:43 PM
 Subject: Starting to think about 4.8.1
I'd like to have an RC out within 10 days, to start a regular monthly cadence.

Just checked jira. There are 21 items fixed, and 30 items either open or 
patch-available. I'll do a pass through all the open issues. It would be great 
if everybody could go through their jiras and push issues they won't get to 
within 10 days to 4.8.2 and unschedule those from 4.8.
Also, if there's anything that must go in, please let me know.

Thanks.

-- Lars (- your friendly RM)




[jira] [Comment Edited] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-09-07 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471864#comment-15471864
 ] 

prakul agarwal edited comment on PHOENIX-3210 at 9/7/16 9:29 PM:
-

[~samarthjain] I have moved some tests from DateTimeIT to DateTime2IT. 
DateTime2IT has tests which create their own tables for testing purposes, 
whereas DateTimeIT relies on @Before to create the tables. So we are currently 
creating unnecessary tables. 
I thought we could bifurcate them. What do you think?


was (Author: prakul):
[~samarthjain] I have moved some tests from DateTimeIT to DateTime2IT. 
 DateTime2IT has tests which are creating there own  tables for testing 
purpose, whereas DateTimeIT relies on @Before to create the tables. So we are 
creating unnecessary tables presently.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3210.patch, PHOENIX-3210_v2.patch
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)
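The failure above boils down to an unchecked cast of a java.lang.Double to java.math.BigDecimal. A minimal standalone illustration of the failure mode and a defensive conversion — the helper names are invented for illustration, not the actual UpsertCompiler code:

```java
import java.math.BigDecimal;

public class CastDemo {
    // Unsafe: fails when the bound value is a Double, as in the stack trace above.
    static BigDecimal unsafe(Object value) {
        return (BigDecimal) value; // ClassCastException for java.lang.Double
    }

    // Safe: convert explicitly instead of casting blindly.
    static BigDecimal safe(Object value) {
        if (value instanceof BigDecimal) {
            return (BigDecimal) value;
        }
        if (value instanceof Number) {
            return BigDecimal.valueOf(((Number) value).doubleValue());
        }
        throw new IllegalArgumentException("not a number: " + value);
    }

    public static void main(String[] args) {
        try {
            unsafe(Double.valueOf(1.5));
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as reported"); // reproduced
        }
        System.out.println(safe(Double.valueOf(1.5))); // 1.5
    }
}
```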





[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-09-07 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471864#comment-15471864
 ] 

prakul agarwal commented on PHOENIX-3210:
-

[~samarthjain] I have moved some tests from DateTimeIT to DateTime2IT. 
DateTime2IT has tests which create their own tables for testing purposes, 
whereas DateTimeIT relies on @Before to create the tables. So we are currently 
creating unnecessary tables.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3210.patch, PHOENIX-3210_v2.patch
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)





[jira] [Commented] (PHOENIX-3246) U+2002 (En Space) not handled as whitespace in grammar

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471858#comment-15471858
 ] 

Hudson commented on PHOENIX-3246:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.8-HBase-1.2 #9 (See 
[https://builds.apache.org/job/Phoenix-4.8-HBase-1.2/9/])
PHOENIX-3246 Treat U+2002 as whitespace in parser (elserj: rev 
f83a99342224cd9eae6f8fc2e5d4ae4eec4a7e9b)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/parse/SQLParser.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java
* (edit) phoenix-core/src/main/antlr3/PhoenixSQL.g


> U+2002 (En Space) not handled as whitespace in grammar
> --
>
> Key: PHOENIX-3246
> URL: https://issues.apache.org/jira/browse/PHOENIX-3246
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3246.001.patch
>
>
> I had the goofiest query issue the other day. A seemingly fine query was 
> throwing a parse error via sqlline.
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Unexpected char: ' ' 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: ' '
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:118)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.lang.RuntimeException: Unexpected char: ' '
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4324)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5437)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.skipOffTokenChannels(CommonTokenStream.java:113)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:102)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.LA(BufferedTokenStream.java:174)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.mismatchIsUnwantedToken(BaseRecognizer.java:127)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:354)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseNoReserved(PhoenixSQLParser.java:9969)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.identifier(PhoenixSQLParser.java:9936)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3938)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3858)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.create_table_node(PhoenixSQLParser.java:1104)
>   at 
> 
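The root cause is that the grammar's whitespace rule only accepted ASCII whitespace, while U+2002 (En Space) is whitespace only in the Unicode sense. A small standalone Java illustration — the asciiWhitespace helper is an invented stand-in for an ASCII-only WS token, not the actual PhoenixSQL.g rule:

```java
public class EnSpaceDemo {
    // An ASCII-only whitespace check, like a grammar WS rule limited to
    // space, tab, newline, and carriage return.
    static boolean asciiWhitespace(char c) {
        return c == ' ' || c == '\t' || c == '\n' || c == '\r';
    }

    public static void main(String[] args) {
        char enSpace = '\u2002'; // EN SPACE: looks like a space, isn't ASCII 0x20
        System.out.println(asciiWhitespace(enSpace));        // false -> lexer error
        System.out.println(Character.isWhitespace(enSpace)); // true per Unicode
    }
}
```

This is why the error message prints a character that looks exactly like a space: the lexer rejects it even though a human (and Character.isWhitespace) treats it as whitespace.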

[jira] [Resolved] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-2785.

Resolution: Fixed

Pushed to:
4.8-0.98
4.8-1.0
4.8-1.1
4.8-1.2
4.x-0.98
4.x-1.1
master (1.2)

> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we always store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.
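The optimization can be sketched as follows — a simplified, hypothetical model of building a mutation, not the actual patch: for immutable rows, a null column value is simply skipped instead of being written as a delete marker, since an absent column already reads back as NULL.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SkipNullsDemo {
    // Build the set of cells to write for one row. A null value entry in
    // 'values' represents an explicit NULL from the UPSERT.
    static Map<String, byte[]> toMutation(Map<String, byte[]> values,
                                          boolean immutableRows) {
        Map<String, byte[]> cells = new LinkedHashMap<>();
        for (Map.Entry<String, byte[]> e : values.entrySet()) {
            if (e.getValue() == null && immutableRows) {
                continue; // immutable table: omit the column, no delete marker needed
            }
            cells.put(e.getKey(), e.getValue()); // mutable tables still need the marker
        }
        return cells;
    }

    public static void main(String[] args) {
        Map<String, byte[]> values = new LinkedHashMap<>();
        values.put("A", new byte[]{1});
        values.put("B", null);
        System.out.println(toMutation(values, true).size());  // 1: NULL skipped
        System.out.println(toMutation(values, false).size()); // 2: marker kept
    }
}
```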





[jira] [Updated] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-2785:
---
Description: 
Currently we always store Delete markers (or explicit Nulls). For immutable 
tables that is not necessary. Null is not distinguishable from an absent column.


  was:
Currently we do store Delete markers (or explicit Nulls). For immutable tables 
that is not necessary. Null is that distinguishable from an absent column.



> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we always store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.





[jira] [Commented] (PHOENIX-3246) U+2002 (En Space) not handled as whitespace in grammar

2016-09-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471817#comment-15471817
 ] 

Hudson commented on PHOENIX-3246:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1383 (See 
[https://builds.apache.org/job/Phoenix-master/1383/])
PHOENIX-3246 Treat U+2002 as whitespace in parser (elserj: rev 
b65e385a828f89980ba4e5ae68f724d7cad50265)
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/parse/SQLParser.java
* (edit) phoenix-core/src/main/antlr3/PhoenixSQL.g


> U+2002 (En Space) not handled as whitespace in grammar
> --
>
> Key: PHOENIX-3246
> URL: https://issues.apache.org/jira/browse/PHOENIX-3246
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3246.001.patch
>
>
> I had the goofiest query issue the other day. A seemingly fine query was 
> throwing a parse error via sqlline.
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Unexpected char: ' ' 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: ' '
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:118)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.lang.RuntimeException: Unexpected char: ' '
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4324)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5437)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.skipOffTokenChannels(CommonTokenStream.java:113)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:102)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.LA(BufferedTokenStream.java:174)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.mismatchIsUnwantedToken(BaseRecognizer.java:127)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:354)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseNoReserved(PhoenixSQLParser.java:9969)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.identifier(PhoenixSQLParser.java:9936)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3938)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3858)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.create_table_node(PhoenixSQLParser.java:1104)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:816)
>  

[jira] [Commented] (PHOENIX-2650) Please delete old releases from mirroring system

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471782#comment-15471782
 ] 

James Taylor commented on PHOENIX-2650:
---

Thanks, [~elserj]. You are an outstanding OS citizen!

> Please delete old releases from mirroring system
> 
>
> Key: PHOENIX-2650
> URL: https://issues.apache.org/jira/browse/PHOENIX-2650
> Project: Phoenix
>  Issue Type: Task
> Environment: 
>Reporter: Sebb
>Assignee: Josh Elser
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. all except the latest:
> 3.3.1
> 4.6.0
> Thanks!
> Also, if you have a release guide, perhaps you could add a cleanup stage to 
> it?
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Updated] (PHOENIX-2650) Please delete old releases from mirroring system

2016-09-07 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2650:

Issue Type: Task  (was: Bug)

> Please delete old releases from mirroring system
> 
>
> Key: PHOENIX-2650
> URL: https://issues.apache.org/jira/browse/PHOENIX-2650
> Project: Phoenix
>  Issue Type: Task
> Environment: 
>Reporter: Sebb
>Assignee: Josh Elser
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. all except the latest:
> 3.3.1
> 4.6.0
> Thanks!
> Also, if you have a release guide, perhaps you could add a cleanup stage to 
> it?
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Resolved] (PHOENIX-2650) Please delete old releases from mirroring system

2016-09-07 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-2650.
-
Resolution: Fixed

Updated http://phoenix.apache.org/release.html and removed 4.7.0 artifacts from 
dist.a.o.

> Please delete old releases from mirroring system
> 
>
> Key: PHOENIX-2650
> URL: https://issues.apache.org/jira/browse/PHOENIX-2650
> Project: Phoenix
>  Issue Type: Task
> Environment: 
>Reporter: Sebb
>Assignee: Josh Elser
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. all except the latest:
> 3.3.1
> 4.6.0
> Thanks!
> Also, if you have a release guide, perhaps you could add a cleanup stage to 
> it?
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Commented] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471766#comment-15471766
 ] 

Hadoop QA commented on PHOENIX-2946:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12827428/PHOENIX-2946_v3.patch
  against master branch at commit b65e385a828f89980ba4e5ae68f724d7cad50265.
  ATTACHMENT ID: 12827428

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"CREATE TABLE IF NOT EXISTS T1 (k1 INTEGER NOT NULL, dates 
DATE, timestamps TIMESTAMP, times TIME CONSTRAINT pk PRIMARY KEY (k1))";
+String dml = "UPSERT INTO T1 VALUES (1, TO_DATE('Sat, 3 Feb 2008 
03:05:06 GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
15:10:20'), " +
+dml = "UPSERT INTO T1 VALUES (2, TO_DATE('Sat, 3 Feb 2008 03:05:06 
GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
10:10:20'), " +
+dml = "UPSERT INTO T1 VALUES (3, TO_DATE('Sat, 3 Feb 2008 03:05:06 
GMT', 'EEE, d MMM  HH:mm:ss z', 'UTC'), TO_TIMESTAMP('2006-04-12 
08:10:20'), " +
+throw new 
RuntimeException(TypeMismatchException.newException(PTimestamp.INSTANCE, 
returnType, this.getName()));
+return super.isBytesComparableWith(otherType) || otherType == 
PTime.INSTANCE || otherType == PTimestamp.INSTANCE;
+return super.isBytesComparableWith(otherType) || otherType == 
PDate.INSTANCE || otherType == PTimestamp.INSTANCE;
+  return super.isBytesComparableWith(otherType) || otherType == 
PTime.INSTANCE || otherType == PDate.INSTANCE;
+return super.isBytesComparableWith(otherType) || otherType == 
PUnsignedTime.INSTANCE || otherType == PUnsignedTimestamp.INSTANCE;
+return super.isBytesComparableWith(otherType) || otherType == 
PUnsignedDate.INSTANCE || otherType == PUnsignedTimestamp.INSTANCE;

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertSelectIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CastAndCoerceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SpooledSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CSVCommonsLoaderIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertValuesIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.HashJoinIT

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/555//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/555//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/555//console

This message is automatically generated.

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> 

[jira] [Updated] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-2785:
---
Fix Version/s: 4.9.0

> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.9.0, 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we do store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.





[jira] [Commented] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-09-07 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471740#comment-15471740
 ] 

Lars Hofhansl commented on PHOENIX-2785:


Coming back to this. Will commit now for 4.8.1.

> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we do store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is not distinguishable from an absent 
> column.





Starting to think about 4.8.1

2016-09-07 Thread larsh
I'd like to have an RC out within 10 days, to start a regular monthly cadence.

Just checked jira. There are 21 items fixed, and 30 items either open or 
patch-available. I'll do a pass through all the open issues. It would be great 
if everybody could go through their jiras and push issues they won't get to 
within 10 days to 4.8.2 and unschedule those from 4.8.
Also, if there's anything that must go in, please let me know.

Thanks.

-- Lars (- your friendly RM)



[jira] [Commented] (PHOENIX-2650) Please delete old releases from mirroring system

2016-09-07 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471585#comment-15471585
 ] 

Josh Elser commented on PHOENIX-2650:
-

Let me just take care of this.

> Please delete old releases from mirroring system
> 
>
> Key: PHOENIX-2650
> URL: https://issues.apache.org/jira/browse/PHOENIX-2650
> Project: Phoenix
>  Issue Type: Bug
> Environment: 
>Reporter: Sebb
>Assignee: Josh Elser
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. all except the latest:
> 3.3.1
> 4.6.0
> Thanks!
> Also, if you have a release guide, perhaps you could add a cleanup stage to 
> it?
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Assigned] (PHOENIX-2650) Please delete old releases from mirroring system

2016-09-07 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-2650:
---

Assignee: Josh Elser  (was: Mujtaba Chohan)

> Please delete old releases from mirroring system
> 
>
> Key: PHOENIX-2650
> URL: https://issues.apache.org/jira/browse/PHOENIX-2650
> Project: Phoenix
>  Issue Type: Bug
> Environment: 
>Reporter: Sebb
>Assignee: Josh Elser
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. all except the latest:
> 3.3.1
> 4.6.0
> Thanks!
> Also, if you have a release guide, perhaps you could add a cleanup stage to 
> it?
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Commented] (PHOENIX-2991) Add missing documentation for functions

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471584#comment-15471584
 ] 

James Taylor commented on PHOENIX-2991:
---

+1. Looks great, [~elserj]. Thanks!

> Add missing documentation for functions
> ---
>
> Key: PHOENIX-2991
> URL: https://issues.apache.org/jira/browse/PHOENIX-2991
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2991.001.patch
>
>
> In PHOENIX-2990, I noticed that we were missing some functions on the 
> reference page on the website:
> * GetBitFunction.java
> * GetByteFunction.java
> * OctetLengthFunction.java
> * SetBitFunction.java
> * SetByteFunction.java
> * ToTimeFunction.java
> * ToTimestampFunction.java
> TO_TIME and TO_TIMESTAMP are probably the most heinous since we actually 
> refer to them elsewhere in the same document. It would be nice to make sure 
> these are all documented.





[jira] [Updated] (PHOENIX-2946) Projected comparison between date and timestamp columns always returns true

2016-09-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2946:
--
Attachment: PHOENIX-2946_v3.patch

Attaching v3 of the patch, which removes the codec for PTimestamp and 
PUnsignedTimestamp (which should never have been there) and handles special 
cases for TIMESTAMP arguments without built-in functions.

FYI, [~kliew] - your new tests pass with this patch.

Would you mind reviewing, [~tdsilva]?

> Projected comparison between date and timestamp columns always returns true
> ---
>
> Key: PHOENIX-2946
> URL: https://issues.apache.org/jira/browse/PHOENIX-2946
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
>  Labels: comparison, date, timestamp
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2946_v2.patch, PHOENIX-2946_v3.patch
>
>
> {code}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table test (dateCol 
> DATE primary key, timestampCol TIMESTAMP);
> No rows affected (2.559 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into test values 
> (TO_DATE('1990-01-01'), NOW());
> 1 row affected (0.255 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select dateCol = timestampCol 
> from test;
> +--+
> |  DATECOL = TIMESTAMPCOL  |
> +--+
> | true |
> +--+
> 1 row selected (0.019 seconds)
> {code}





[jira] [Commented] (PHOENIX-2991) Add missing documentation for functions

2016-09-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471419#comment-15471419
 ] 

Hadoop QA commented on PHOENIX-2991:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12827418/PHOENIX-2991.001.patch
  against master branch at commit c02d6cb5971f7b17bcd5e308952fa081e32adf19.
  ATTACHMENT ID: 12827418

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/554//console

This message is automatically generated.

> Add missing documentation for functions
> ---
>
> Key: PHOENIX-2991
> URL: https://issues.apache.org/jira/browse/PHOENIX-2991
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2991.001.patch
>
>
> In PHOENIX-2990, I noticed that we were missing some functions on the 
> reference page on the website:
> * GetBitFunction.java
> * GetByteFunction.java
> * OctetLengthFunction.java
> * SetBitFunction.java
> * SetByteFunction.java
> * ToTimeFunction.java
> * ToTimestampFunction.java
> TO_TIME and TO_TIMESTAMP are probably the most heinous since we actually 
> refer to them elsewhere in the same document. It would be nice to make sure 
> these are all documented.





[jira] [Comment Edited] (PHOENIX-2991) Add missing documentation for functions

2016-09-07 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471405#comment-15471405
 ] 

Josh Elser edited comment on PHOENIX-2991 at 9/7/16 6:18 PM:
-

.001 Adds documentation for \{Get,Set\}_\{Bit,Byte\} functions, to_time, 
to_timestamp, and mentions the ANSI SQL date/time literals that James mentioned 
in the corresponding function description.


was (Author: elserj):
.001 Adds documentation for {Get,Set}_{Bit,Byte} functions, to_time, 
to_timestamp, and mentions the ANSI SQL date/time literals that James mentioned 
in the corresponding function description.

> Add missing documentation for functions
> ---
>
> Key: PHOENIX-2991
> URL: https://issues.apache.org/jira/browse/PHOENIX-2991
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2991.001.patch
>
>
> In PHOENIX-2990, I noticed that we were missing some functions on the 
> reference page on the website:
> * GetBitFunction.java
> * GetByteFunction.java
> * OctetLengthFunction.java
> * SetBitFunction.java
> * SetByteFunction.java
> * ToTimeFunction.java
> * ToTimestampFunction.java
> TO_TIME and TO_TIMESTAMP are probably the most heinous since we actually 
> refer to them elsewhere in the same document. It would be nice to make sure 
> these are all documented.





[jira] [Updated] (PHOENIX-2991) Add missing documentation for functions

2016-09-07 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2991:

Attachment: PHOENIX-2991.001.patch

.001 Adds documentation for {Get,Set}_{Bit,Byte} functions, to_time, 
to_timestamp, and mentions the ANSI SQL date/time literals that James mentioned 
in the corresponding function description.

> Add missing documentation for functions
> ---
>
> Key: PHOENIX-2991
> URL: https://issues.apache.org/jira/browse/PHOENIX-2991
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2991.001.patch
>
>
> In PHOENIX-2990, I noticed that we were missing some functions on the 
> reference page on the website:
> * GetBitFunction.java
> * GetByteFunction.java
> * OctetLengthFunction.java
> * SetBitFunction.java
> * SetByteFunction.java
> * ToTimeFunction.java
> * ToTimestampFunction.java
> TO_TIME and TO_TIMESTAMP are probably the most heinous since we actually 
> refer to them elsewhere in the same document. It would be nice to make sure 
> these are all documented.





[jira] [Commented] (PHOENIX-3194) Document Hive integration

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471001#comment-15471001
 ] 

James Taylor commented on PHOENIX-3194:
---

This looks really good, [~sergey.soldatov] - thanks so much. Here's some minor 
feedback:
- Would you mind adding the Hive menu item right after the "Apache Spark 
Integration" menu item?
- For the limitations, where it says you must modify Hive code, can you link to 
Hive JIRAs here (and file them if they're not filed)? That way we have 
something to track to round out the functionality. Same for any Phoenix-based 
limitations.

> Document Hive integration
> -
>
> Key: PHOENIX-3194
> URL: https://issues.apache.org/jira/browse/PHOENIX-3194
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-3194.patch
>
>
> We should document our Hive integration similar to the way we've documented 
> our Spark integration[1] and Pig integration[2]. This would focus on how to 
> use it (as opposed to how it was implemented), limitations, version 
> requirements, and include examples and any required/optional config 
> parameters or other setup required.
> [1] http://phoenix.apache.org/phoenix_spark.html
> [2] http://phoenix.apache.org/pig_integration.html





[jira] [Commented] (PHOENIX-3246) U+2002 (En Space) not handled as whitespace in grammar

2016-09-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470988#comment-15470988
 ] 

James Taylor commented on PHOENIX-3246:
---

+1. Looks great. Thanks, [~elserj]! Please check in to the 4.8 and 4.x branches.

> U+2002 (En Space) not handled as whitespace in grammar
> --
>
> Key: PHOENIX-3246
> URL: https://issues.apache.org/jira/browse/PHOENIX-3246
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3246.001.patch
>
>
> I had the goofiest query issue the other day. A seemingly fine query was 
> throwing a parse error via sqlline.
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Unexpected char: ' ' 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: ' '
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:118)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.lang.RuntimeException: Unexpected char: ' '
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4324)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5437)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.skipOffTokenChannels(CommonTokenStream.java:113)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:102)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.LA(BufferedTokenStream.java:174)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.mismatchIsUnwantedToken(BaseRecognizer.java:127)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:354)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseNoReserved(PhoenixSQLParser.java:9969)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.identifier(PhoenixSQLParser.java:9936)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3938)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3858)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.create_table_node(PhoenixSQLParser.java:1104)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:816)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 18 more
> {noformat}
> Re-typing the statement by hand worked successfully.
> After some hexdump and diff action, I finally found out 
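A minimal sketch (not from the patch, and assuming a hypothetical statement) of the kind of hexdump-and-diff hunt described above, locating non-ASCII whitespace such as U+2002 in a SQL string:

```python
import unicodedata

# A CREATE TABLE statement with an EN SPACE (U+2002) hiding between tokens
sql = "CREATE TABLE t (id\u2002INTEGER PRIMARY KEY)"

# Report every character outside printable ASCII, with its offset and name
for offset, ch in enumerate(sql):
    if ord(ch) > 0x7E:
        print(f"offset {offset}: U+{ord(ch):04X} {unicodedata.name(ch)}")
# prints: offset 18: U+2002 EN SPACE
```

Such a character renders identically to a regular space in most terminals, which is why re-typing the statement by hand fixes it.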

[jira] [Commented] (PHOENIX-3246) U+2002 (En Space) not handled as whitespace in grammar

2016-09-07 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470898#comment-15470898
 ] 

Josh Elser commented on PHOENIX-3246:
-

Mighty [~jamestaylor], wdyt?

> U+2002 (En Space) not handled as whitespace in grammar
> --
>
> Key: PHOENIX-3246
> URL: https://issues.apache.org/jira/browse/PHOENIX-3246
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3246.001.patch
>
>
> I had the goofiest query issue the other day. A seemingly fine query was 
> throwing a parse error via sqlline.
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Unexpected char: ' ' 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: ' '
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:118)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1710)
>   at sqlline.Commands.run(Commands.java:1285)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:803)
>   at sqlline.SqlLine.initArgs(SqlLine.java:613)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.lang.RuntimeException: Unexpected char: ' '
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4324)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5437)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.skipOffTokenChannels(CommonTokenStream.java:113)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:102)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BufferedTokenStream.LA(BufferedTokenStream.java:174)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.mismatchIsUnwantedToken(BaseRecognizer.java:127)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:354)
>   at 
> org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseNoReserved(PhoenixSQLParser.java:9969)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.identifier(PhoenixSQLParser.java:9936)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3938)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3858)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.create_table_node(PhoenixSQLParser.java:1104)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:816)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
>   at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 18 more
> {noformat}
> Re-typing the statement by hand worked successfully.
> After some hexdump and diff action, I finally found out that some of the 
> space characters in the statement 

[jira] [Commented] (PHOENIX-3218) First draft of Phoenix Tuning Guide

2016-09-07 Thread Sumit Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469745#comment-15469745
 ] 

Sumit Nigam commented on PHOENIX-3218:
--

Thank you for this good initiative. I have some queries/suggestions:

In improving parallelization, you mentioned:
*Turn on stats, should improve it*
I thought stats were turned on by default. Is that not correct?

Plus, I could not understand the following recommendation in the document:
*Configure stats to use small pipe size, creates even more stats*
Did you mean creating smaller guideposts by reducing the value of 
phoenix.stats.guidepost.width from its default of 100 MB?

Lastly, the default IncreasingToUpperBoundRegionSplitPolicy causes the first 
region split at 128 MB. Is the 100 MB default for phoenix.stats.guidepost.width 
a good value, or should it ideally be reduced from the start, given that it is 
almost the same as the region size?

We could also mention in the document that if the guidepost size is reduced, 
phoenix.query.queueSize may need to be increased, since more work will be 
queued.
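For reference, phoenix.stats.guidepost.width is a server-side setting; a sketch of lowering it in hbase-site.xml on the region servers (the 50 MB value here is an illustrative assumption, not a recommendation from the guide):

```xml
<!-- hbase-site.xml: shrink guidepost width from the 100 MB default -->
<property>
  <name>phoenix.stats.guidepost.width</name>
  <!-- 52428800 bytes = 50 MB (example value) -->
  <value>52428800</value>
</property>
```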


> First draft of Phoenix Tuning Guide
> ---
>
> Key: PHOENIX-3218
> URL: https://issues.apache.org/jira/browse/PHOENIX-3218
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Peter Conrad
> Attachments: Phoenix-Tuning-Guide.md
>
>
> Here's a first draft of a Tuning Guide for Phoenix performance. 





[jira] [Updated] (PHOENIX-3194) Document Hive integration

2016-09-07 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-3194:
-
Attachment: PHOENIX-3194.patch

The diff for the web workspace. [~jamestaylor], sorry, I missed your previous comment. 

> Document Hive integration
> -
>
> Key: PHOENIX-3194
> URL: https://issues.apache.org/jira/browse/PHOENIX-3194
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-3194.patch
>
>
> We should document our Hive integration similar to the way we've documented 
> our Spark integration[1] and Pig integration[2]. This would focus on how to 
> use it (as opposed to how it was implemented), limitations, version 
> requirements, and include examples and any required/optional config 
> parameters or other setup required.
> [1] http://phoenix.apache.org/phoenix_spark.html
> [2] http://phoenix.apache.org/pig_integration.html


