[jira] [Updated] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houliang Qi updated PHOENIX-3336:
-
Attachment: (was: wrong-index.png)

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: create_table_orders.sql, readme.txt, sample_1.csv, 
> sample_2.csv, wrong-index-2.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and the indexed column is then used to 
> retrieve the data, wrong results are returned.
> As shown in the attachments, I create a table called orders_5 and a local 
> index on it called clerk_5; I then use two clients to upsert data to the 
> same row key of table orders_5. You will see that the local index clerk_5 
> holds some stale records (which may be acceptable under eventual 
> consistency). However, when you query by the previous indexed value, you 
> still get a result, and worse, the result is wrong: it is neither the record 
> you inserted nor the record in the primary table (in this case, table 
> orders_5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531626#comment-15531626
 ] 

Houliang Qi commented on PHOENIX-3336:
--

I have added the DDL statements and the test case in readme.txt; the 
attachment illustrates the steps I have taken. I hope this will be helpful to 
you.
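
For readers without the attachments, a minimal sketch of the scenario (column 
names here are assumptions; the exact DDL and data are in 
create_table_orders.sql and the sample CSVs):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class LocalIndexStaleReadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS ORDERS_5 (ORDER_ID BIGINT PRIMARY KEY, CLERK VARCHAR)");
            conn.createStatement().execute(
                "CREATE LOCAL INDEX IF NOT EXISTS CLERK_5 ON ORDERS_5 (CLERK)");
            // In the report, two clients run upserts like these concurrently
            // against the same row key; sequentially they look like:
            conn.createStatement().execute("UPSERT INTO ORDERS_5 VALUES (1, 'clerk-a')");
            conn.commit();
            conn.createStatement().execute("UPSERT INTO ORDERS_5 VALUES (1, 'clerk-b')");
            conn.commit();
            // Querying by the previous (now stale) indexed value should return
            // nothing; the bug is that a row comes back, with wrong contents.
            try (ResultSet rs = conn.createStatement().executeQuery(
                    "SELECT * FROM ORDERS_5 WHERE CLERK = 'clerk-a'")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
{code}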

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: create_table_orders.sql, readme.txt, sample_1.csv, 
> sample_2.csv, wrong-index-2.png, wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and the indexed column is then used to 
> retrieve the data, wrong results are returned.
> As shown in the attachments, I create a table called orders_5 and a local 
> index on it called clerk_5; I then use two clients to upsert data to the 
> same row key of table orders_5. You will see that the local index clerk_5 
> holds some stale records (which may be acceptable under eventual 
> consistency). However, when you query by the previous indexed value, you 
> still get a result, and worse, the result is wrong: it is neither the record 
> you inserted nor the record in the primary table (in this case, table 
> orders_5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houliang Qi updated PHOENIX-3336:
-
Attachment: sample_2.csv
sample_1.csv
readme.txt
create_table_orders.sql

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: create_table_orders.sql, readme.txt, sample_1.csv, 
> sample_2.csv, wrong-index-2.png, wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and the indexed column is then used to 
> retrieve the data, wrong results are returned.
> As shown in the attachments, I create a table called orders_5 and a local 
> index on it called clerk_5; I then use two clients to upsert data to the 
> same row key of table orders_5. You will see that the local index clerk_5 
> holds some stale records (which may be acceptable under eventual 
> consistency). However, when you query by the previous indexed value, you 
> still get a result, and worse, the result is wrong: it is neither the record 
> you inserted nor the record in the primary table (in this case, table 
> orders_5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-28 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531520#comment-15531520
 ] 

Devaraj Das commented on PHOENIX-3159:
--

+1 

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, 
> PHOENIX-3159_v2.patch, PHOENIX-3159_v3.patch, PHOENIX-3159_v4.patch
>
>
> CachingHTableFactory may close an HTable during eviction even while it is 
> being used for writing by another thread, which causes the writing thread to 
> fail and the index to be disabled.
> LRU eviction closes the HTable (or its underlying connection) when the cache 
> is full and a new HTable is requested:
> {code}
> 2016-08-04 13:45:21,109 DEBUG 
> [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35]
>  client.ConnectionManager$HConnectionImplementation: Closing HConnection 
> (debugging purposes only)
> java.lang.Exception
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at 
> org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at 
> org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was still using this old connection, which had been 
> closed during LRU eviction, to write to the table:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Although the workaround is to increase the cache size 
> (index.tablefactory.cache.size), we should still handle the closing of 
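> As a sketch of one direction (assumed names; not the actual patch), a 
> reference count on the cached table would let eviction defer the real close 
> until the last in-flight writer releases it:
> {code}
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicInteger;
> 
> import org.apache.hadoop.hbase.client.HTableInterface;
> 
> // Wraps a cached HTable: the cache holds one reference, each writer takes one.
> class RefCountedTable {
>     private final HTableInterface delegate;
>     private final AtomicInteger refCount = new AtomicInteger(1); // the cache's own reference
> 
>     RefCountedTable(HTableInterface delegate) { this.delegate = delegate; }
> 
>     HTableInterface acquire() {
>         refCount.incrementAndGet(); // a writer checks the table out
>         return delegate;
>     }
> 
>     // Called by writers when done, and by the LRU map on eviction.
>     void release() throws IOException {
>         if (refCount.decrementAndGet() == 0) {
>             delegate.close(); // safe: no thread is still writing through it
>         }
>     }
> }
> {code}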

[jira] [Resolved] (PHOENIX-3235) Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-09-28 Thread Francis Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Chuang resolved PHOENIX-3235.
-
   Resolution: Fixed
Fix Version/s: 4.8.1

Tested and confirmed to be fixed in 4.8.1.

> Tephra errors when trying to create a transactional table in Phoenix 4.8.0
> --
>
> Key: PHOENIX-3235
> URL: https://issues.apache.org/jira/browse/PHOENIX-3235
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: Alpine Linux 3.4, OpenJDK 8 (JDK) in Docker 1.12.1 
> containers
>Reporter: Francis Chuang
> Fix For: 4.8.1
>
>
> I've built a Docker image to run HBase 1.2 with Phoenix 4.8.0 in fully 
> distributed mode. There are two images: one containing HBase and Phoenix, 
> and one with just the Phoenix Query Server. Java is OpenJDK 8 (JDK flavour).
> The Docker images can be found in my branch here: 
> https://github.com/F21/hbase-phoenix/tree/consolidate-images/
> To build each image, simply go into the appropriate folder and run `docker 
> build . -t myimage`.
> To run the master, these environment variables are required:
> {code}
> CLUSTER_NAME: hbase
> HBASE_ROLE: master
> HBASE_ZOOKEEPER_QUORUM: myzk
> HDFS_CLUSTER_NAME: mycluster
> DFS_NAMENODE_RPC_ADDRESS_NN1: nn1:8020
> DFS_NAMENODE_RPC_ADDRESS_NN2: nn2:8020
> DFS_NAMENODE_HTTP_ADDRESS_NN1: nn1:50070
> DFS_NAMENODE_HTTP_ADDRESS_NN2: nn2:50070
> {code}
> To run the region server, these environment variables are required:
> {code}
> CLUSTER_NAME: hbase
> HBASE_ROLE: regionserver
> HBASE_ZOOKEEPER_QUORUM: myzk
> HDFS_CLUSTER_NAME: mycluster
> DFS_NAMENODE_RPC_ADDRESS_NN1: nn1:8020
> DFS_NAMENODE_RPC_ADDRESS_NN2: nn2:8020
> DFS_NAMENODE_HTTP_ADDRESS_NN1: nn1:50070
> DFS_NAMENODE_HTTP_ADDRESS_NN2: nn2:50070
> {code}
> HBase and the transaction server start up correctly along with the query 
> server. I can connect to the query server using SQuirreL SQL, and the 
> Phoenix tables were created correctly. I am also able to create 
> non-transactional tables and run queries against them.
> However, if I try to create a transactional table, I get an error message 
> saying:
> {code}
> Error: Error -1 (0) : Error while executing SQL "CREATE TABLE my_table (k 
> BIGINT PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true": Remote driver error: 
> RuntimeException: java.lang.Exception: Thrift error for 
> org.apache.tephra.distributed.TransactionServiceClient$2@660ae15c: Internal 
> error processing startShort -> Exception: Thrift error for 
> org.apache.tephra.distributed.TransactionServiceClient$2@660ae15c: Internal 
> error processing startShort -> TApplicationException: Internal error 
> processing startShort
> SQLState:  0
> ErrorCode: -1
> {code}
> JPS confirms that the transaction manager is still running:
> {code}
> bash-4.3# jps
> 771 Jps
> 138 HMaster
> 190 TransactionServiceMain
> {code}
> In ` /tmp/tephra-/tephra-service--m9edd51-hmaster1.m9edd51.log`, it logs the 
> following:
> {code}
> Wed Aug 31 23:12:33 UTC 2016 Starting tephra service on 
> m9edd51-hmaster1.m9edd51
> -f: file size (blocks) unlimited
> -t: cpu time (seconds) unlimited
> -d: data seg size (kb) unlimited
> -s: stack size (kb)8192
> -c: core file size (blocks)unlimited
> -m: resident set size (kb) unlimited
> -l: locked memory (kb) 64
> -p: processes  unlimited
> -n: file descriptors   65536
> -v: address space (kb) unlimited
> -w: locks  unlimited
> -e: scheduling priority 0
> -r: real-time priority 0
> Command:  /usr/lib/jvm/java-1.8-openjdk/bin/java -XX:+UseConcMarkSweepGC -cp 
> 

[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531384#comment-15531384
 ] 

Hudson commented on PHOENIX-3181:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1422 (See 
[https://builds.apache.org/job/Phoenix-master/1422/])
PHOENIX-3181 Run test methods in parallel to reduce test suite run time 
(jamestaylor: rev 3032a6f6f70fe0abdf24731fdc0058e22f345247)
* (edit) pom.xml
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelRunListener.java


> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531383#comment-15531383
 ] 

James Taylor commented on PHOENIX-6:


bq. but incrementing a numeric column can use the optimized path, right?
- An Increment does a read+write under row lock like we're planning for this 
JIRA. See HRegion.java [1], doIncrement, applyIncrementsToColumnFamily and 
getIncrementCurrentValue calls.
- Our mechanism will have only one code path - there's no need to use Increment 
(and it wouldn't work in the context of Phoenix for a variety of reasons).
- Our mechanism will be more general as you can set a column value to the 
evaluation of *any* expression, not just column+1
- Our mechanism also gives you the ability to define the initial value while 
Increment doesn't

[1] 
https://github.com/apache/hbase/blob/branch-1.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}
> So the general form is:
> {code}
> UPSERT ... VALUES ... [ ON DUPLICATE KEY [IGNORE | UPDATE 
> <column>=<expression>, ...] ]
> {code}
> The following restrictions will apply:
> - The <column> may not be part of the primary key constraint - only KeyValue 
> columns will be allowed.
> - If the table is immutable, the <column> may not appear in a secondary 
> index. This is because the mutations for indexes on immutable tables are 
> calculated on the client-side, while this new syntax would potentially modify 
> the value on the server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread Gary Horen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531316#comment-15531316
 ] 

Gary Horen commented on PHOENIX-6:
--

[~giacomotaylor]:
>>would thus require a read before write

For a clause

ON DUPLICATE KEY counter = counter + 1

you could just issue a Put that contains an Increment, right? My understanding 
is that HBase would either instantiate the column (and row) if it doesn't 
exist, or apply the Increment to the existing column, right? We discussed this 
a couple of days ago but I'm not seeing any explicit description of it in this 
JIRA.

ON DUPLICATE KEY IGNORE and ON DUPLICATE KEY <column>=<expression> would 
require read-then-write as far as I understand, but incrementing a numeric 
column can use the optimized path, right?
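
For reference, the HBase-native path under discussion looks like this (a 
sketch; the row key, column family, and qualifier are assumptions):
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class IncrementSketch {
    // Increment reads the current value and writes the new one under the row
    // lock, creating the column (and row) if it doesn't exist yet.
    static void bumpCounter(Table table) throws IOException {
        Increment inc = new Increment(Bytes.toBytes("a"));                // row key
        inc.addColumn(Bytes.toBytes("0"), Bytes.toBytes("COUNTER"), 1L);  // family, qualifier, delta
        table.increment(inc);
    }
}
{code}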

> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}
> So the general form is:
> {code}
> UPSERT ... VALUES ... [ ON DUPLICATE KEY [IGNORE | UPDATE 
> <column>=<expression>, ...] ]
> {code}
> The following restrictions will apply:
> - The <column> may not be part of the primary key constraint - only KeyValue 
> columns will be allowed.
> - If the table is immutable, the <column> may not appear in a secondary 
> index. This is because the mutations for indexes on immutable tables are 
> calculated on the client-side, while this new syntax would potentially modify 
> the value on the server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3259) Create fat jar for transaction manager

2016-09-28 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531229#comment-15531229
 ] 

Francis Chuang commented on PHOENIX-3259:
-

[~jamestaylor], replacing guava 12 with guava 13 worked! Thanks for the 
workaround!

> Create fat jar for transaction manager
> --
>
> Key: PHOENIX-3259
> URL: https://issues.apache.org/jira/browse/PHOENIX-3259
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Due to the incompatible guava version in HBase (12 instead of 13), the 
> transaction manager will not work by just pointing it to the HBase lib dir. A 
> reasonable alternative would be for Phoenix to build another fat jar 
> specifically for the transaction manager which includes all necessary 
> dependencies (namely guava 13). Then the bin/tephra script (we should rename 
> that to tephra.sh perhaps?) would simply need to have this jar on the 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3201) Implement DAYOFWEEK and DAYOFYEAR built-in functions

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531179#comment-15531179
 ] 

James Taylor commented on PHOENIX-3201:
---

[~an...@apache.org] - there are a bunch of scalar date functions. See all of 
the ones derived from DateScalarFunction. None of these consider 
phoenix.query.dateFormatTimeZone as far as I know. Please file a separate JIRA 
if you think they should. What does the SQL spec say?
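
For completeness, a usage sketch of the two new built-ins through JDBC (the 
EVENTS table and its EVENT_DATE column are assumptions):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class DayFunctionsSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a table EVENTS(ID BIGINT PRIMARY KEY, EVENT_DATE DATE) exists.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             ResultSet rs = conn.createStatement().executeQuery(
                 "SELECT DAYOFWEEK(EVENT_DATE), DAYOFYEAR(EVENT_DATE) FROM EVENTS")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " / " + rs.getInt(2));
            }
        }
    }
}
{code}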

> Implement DAYOFWEEK and DAYOFYEAR built-in functions
> 
>
> Key: PHOENIX-3201
> URL: https://issues.apache.org/jira/browse/PHOENIX-3201
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: prakul agarwal
>  Labels: newbie
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3201.patch, PHOENIX-3201_4.x-HBase-0.98.patch, 
> PHOENIX-3201_4.x-HBase-1.1.patch, PHOENIX-3201_master.patch
>
>
> DAYOFWEEK() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005645
> DAYOFYEAR() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005676



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-6:
---
Description: 
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key" construct for UPSERT. With this construct, the 
UPSERT VALUES statement would run atomically and would thus require a read 
before write which would obviously have a negative impact on performance. For 
an example of similar syntax, see the MySQL documentation at 
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. A 
related discussion is on PHOENIX-2909.

Initially we'd support the following:
# This would prevent the setting of VAL to 0 if the row already exists:
{code}
UPSERT INTO T (PK, VAL) VALUES ('a',0) 
ON DUPLICATE KEY IGNORE;
{code}
# This would increment the values of COUNTER1 and COUNTER2 if the row already 
exists and otherwise initialize them to 0:
{code}
UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
{code}

So the general form is:
{code}
UPSERT ... VALUES ... [ ON DUPLICATE KEY [IGNORE | UPDATE 
<column>=<expression>, ...] ]
{code}
The following restrictions will apply:
- The <column> may not be part of the primary key constraint - only KeyValue 
columns will be allowed.
- If the table is immutable, the <column> may not appear in a secondary index. 
This is because the mutations for indexes on immutable tables are calculated on 
the client-side, while this new syntax would potentially modify the value on 
the server-side.
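
As a usage sketch of the general form above (the numbered examples omit the 
UPDATE keyword; the connection URL is an assumption), an atomic counter bump 
through JDBC would look like:
{code}
import java.sql.Connection;
import java.sql.DriverManager;

public class OnDuplicateKeySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(true); // each UPSERT executes (and takes the row lock) on its own
            // Initializes COUNTER1/COUNTER2 to 0 for a new row, increments them otherwise.
            conn.createStatement().execute(
                "UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a', 0, 0) "
                + "ON DUPLICATE KEY UPDATE COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1");
        }
    }
}
{code}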

  was:
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key" construct for UPSERT. With this construct, the 
UPSERT VALUES statement would run atomically and would thus require a read 
before write which would obviously have a negative impact on performance. For 
an example of similar syntax, see the MySQL documentation at 
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. A 
related discussion is on PHOENIX-2909.

Initially we'd support the following:
# This would prevent the setting of VAL to 0 if the row already exists:
{code}
UPSERT INTO T (PK, VAL) VALUES ('a',0) 
ON DUPLICATE KEY IGNORE;
{code}
# This would increment the values of COUNTER1 and COUNTER2 if the row already 
exists and otherwise initialize them to 0:
{code}
UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
{code}



> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}
> So the general form is:
> {code}
> UPSERT ... VALUES ... [ ON DUPLICATE KEY [IGNORE | UPDATE 
> <column>=<expression>, ...] ]
> {code}
> The following restrictions will apply:
> - The <column> may not be part of the primary key constraint - only KeyValue 
> columns will be allowed.
> - If the table is immutable, the <column> may not appear in a secondary 
> index. This is because the mutations for indexes on immutable tables are 
> calculated on the client-side, while this new syntax would potentially modify 
> the value on the server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531099#comment-15531099
 ] 

James Taylor commented on PHOENIX-3181:
---

Checked into master only for now. Will monitor how the test runs go, but it 
looks like the overall time is cut in half, down to about 34 minutes.
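
The precondition that makes per-method parallelism safe (from PHOENIX-3036) is 
that every test method derives its own table name. A minimal stand-in for the 
pattern (Phoenix's ITs use a shared generateUniqueName() helper; the 
counter-based version here is an assumption for illustration):
{code}
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;

public class ParallelSafeIT {
    private static final AtomicInteger NAME_COUNTER = new AtomicInteger();

    // Each call yields a fresh table name, so methods running in parallel
    // never operate on each other's tables.
    private static String generateUniqueName() {
        return "T_" + NAME_COUNTER.incrementAndGet();
    }

    @Test
    public void testOne() throws Exception {
        String tableName = generateUniqueName();
        // create tableName, upsert, assert...
    }

    @Test
    public void testTwo() throws Exception {
        String tableName = generateUniqueName();
        // runs concurrently with testOne without colliding
    }
}
{code}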

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531078#comment-15531078
 ] 

Hadoop QA commented on PHOENIX-3181:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12830784/PHOENIX-3181_v9.patch
  against master branch at commit 58596bbc416ce577d3407910f1a127150d8c.
  ATTACHMENT ID: 12830784

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockContext.cleanUp(HRegion.java:5206)
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockImpl.release(HRegion.java:5246)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2898)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/605//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/605//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/605//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/605//console

This message is automatically generated.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2909) Surface checkAndPut through UPDATE statement

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530934#comment-15530934
 ] 

James Taylor commented on PHOENIX-2909:
---

This solution isn't complete because it doesn't handle the potential race 
between a row being inserted and a row being updated. Instead, the plan is to 
implement PHOENIX-6. I think this addresses your concern as well, 
[~cameron.hatfield].
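
For context, a usage sketch of what the description proposes (connection URL 
assumed; with auto-commit on, the translated UPSERT SELECT would read and 
write the row atomically on the server):
{code}
import java.sql.Connection;
import java.sql.DriverManager;

public class AtomicUpdateSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(true); // required for the statement to run atomically on its own
            conn.createStatement().execute(
                "UPSERT INTO my_table(pk1, pk2, counter) "
                + "SELECT pk1, pk2, COALESCE(counter, 0) + 1 "
                + "FROM my_table WHERE pk1 = 1 AND pk2 = 2");
        }
    }
}
{code}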

> Surface checkAndPut through UPDATE statement
> 
>
> Key: PHOENIX-2909
> URL: https://issues.apache.org/jira/browse/PHOENIX-2909
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> We can surface atomic checkAndPut-like functionality through support of the 
> SQL UPSERT statement.
> For example, the following could do a get under a row lock to perform the 
> row update atomically:
> {code}
> UPDATE  my_table SET counter=coalesce(counter,0) + 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> To force prior MVCC transactions to complete (making it serializable as an 
> Increment is), we'd have code like this:
> {code}
> mvcc = region.getMVCC();
> mvcc.completeMemstoreInsert(mvcc.beginMemstoreInsert());
> {code}
> By setting auto commit to true and issuing an UPDATE statement over a 
> non-transactional table, users would get a way for row updates to be atomic. 
> This would work especially well to support counters.
> An UPDATE statement would simply be translated to an equivalent UPSERT SELECT 
> with a flag passed to the server such that the row lock and read occur when 
> executed. For example, the above statement would become:
> {code}
> UPSERT INTO  my_table(pk1,pk2,counter) SELECT pk1, pk2, coalesce(counter,0) + 
> 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> Note that the coalesce call above handles the case where counter is null. 
> This could be made prettier with support for the DEFAULT clause at CREATE 
> TABLE time (PHOENIX-476).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2909) Surface checkAndPut through UPDATE statement

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2909:
--
Assignee: (was: James Taylor)

> Surface checkAndPut through UPDATE statement
> 
>
> Key: PHOENIX-2909
> URL: https://issues.apache.org/jira/browse/PHOENIX-2909
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> We can surface atomic checkAndPut-like functionality through support of the 
> SQL UPSERT statement.
> For example, the following could do a get under a row lock to perform the 
> row update atomically:
> {code}
> UPDATE  my_table SET counter=coalesce(counter,0) + 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> To force prior MVCC transactions to complete (making it serializable as an 
> Increment is), we'd have code like this:
> {code}
> mvcc = region.getMVCC();
> mvcc.completeMemstoreInsert(mvcc.beginMemstoreInsert());
> {code}
> By setting auto commit to true and issuing an UPDATE statement over a 
> non-transactional table, users would get a way for row updates to be atomic. 
> This would work especially well to support counters.
> An UPDATE statement would simply be translated to an equivalent UPSERT SELECT 
> with a flag passed to the server such that the row lock and read occur when 
> executed. For example, the above statement would become:
> {code}
> UPSERT INTO  my_table(pk1,pk2,counter) SELECT pk1, pk2, coalesce(counter,0) + 
> 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> Note that the coalesce call above handles the case where counter is null. 
> This could be made prettier with support for the DEFAULT clause at CREATE 
> TABLE time (PHOENIX-476).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-6:
--

Assignee: James Taylor

> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-6:
---
Description: 
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key" construct for UPSERT. With this construct, the 
UPSERT VALUES statement would run atomically and would thus require a read 
before write which would obviously have a negative impact on performance. For 
an example of similar syntax, see the MySQL documentation at 
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. A 
related discussion is on PHOENIX-2909.

Initially we'd support the following:
# This would prevent the setting of VAL to 0 if the row already exists:
{code}
UPSERT INTO T (PK, VAL) VALUES ('a',0) 
ON DUPLICATE KEY IGNORE;
{code}
# This would increment the values of COUNTER1 and COUNTER2 if the row already 
exists and otherwise initialize them to 0:
{code}
UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
{code}


  was:
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key" construct for UPSERT. With this construct, the 
UPSERT VALUES statement would run atomically and would thus require a read 
before write which would obviously have a negative impact on performance. For 
an example of similar syntax, see the MySQL documentation at 
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. A 
related discussion is on PHOENIX-2909.

Initially we'd support the following:
# This would prevent the setting of VAL to 0 if the row already exists:
{code}
UPSERT INTO T (PK, VAL) VALUES ('a',0) 
ON DUPLICATE KEY IGNORE;
{code}
# This would increment the values of COUNTER1 and COUNTER2 if the row already 
exists and otherwise initialize them to 0:
{code}
UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0) 
ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
{code}



> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2909) Surface checkAndPut through UPDATE statement

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2909:
--
Fix Version/s: (was: 4.9.0)

> Surface checkAndPut through UPDATE statement
> 
>
> Key: PHOENIX-2909
> URL: https://issues.apache.org/jira/browse/PHOENIX-2909
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>
> We can surface atomic checkAndPut-like functionality through support of the 
> SQL UPSERT statement.
> For example, the following could do a get under a row lock to perform the 
> row update atomically:
> {code}
> UPDATE  my_table SET counter=coalesce(counter,0) + 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> To force prior MVCC transactions to complete (making it serializable as an 
> Increment is), we'd have code like this:
> {code}
> mvcc = region.getMVCC();
> mvcc.completeMemstoreInsert(mvcc.beginMemstoreInsert());
> {code}
> By setting auto commit to true and issuing an UPDATE statement over a 
> non-transactional table, users would get a way for row updates to be atomic. 
> This would work especially well to support counters.
> An UPDATE statement would simply be translated to an equivalent UPSERT SELECT 
> with a flag passed to the server such that the row lock and read occur when 
> executed. For example, the above statement would become:
> {code}
> UPSERT INTO  my_table(pk1,pk2,counter) SELECT pk1, pk2, coalesce(counter,0) + 
> 1 
> FROM my_table WHERE pk1 = 1 AND pk2 = 2;
> {code}
> Note that the coalesce call above handles the case where counter is null. 
> This could be made prettier with support for the DEFAULT clause at CREATE 
> TABLE time (PHOENIX-476).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530929#comment-15530929
 ] 

James Taylor commented on PHOENIX-6:


FYI, [~gho...@salesforce.com]

> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0,0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-6:
---
Description: 
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key" construct for UPSERT. With this construct, the 
UPSERT VALUES statement would run atomically and would thus require a read 
before write which would obviously have a negative impact on performance. For 
an example of similar syntax, see the MySQL documentation at 
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. A 
related discussion is on PHOENIX-2909.

Initially we'd support the following:
# This would prevent the setting of VAL to 0 if the row already exists:
{code}
UPSERT INTO T (PK, VAL) VALUES ('a',0) 
ON DUPLICATE KEY IGNORE;
{code}
# This would increment the values of COUNTER1 and COUNTER2 if the row already 
exists and otherwise initialize them to 0:
{code}
UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0) 
ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
{code}


  was:
To support inserting a new row only if it doesn't already exist, we should 
support the "on duplicate key ignore" construct (or it's SQL standard 
equivalent) for UPSERT.

See this discussion for more detail: 
https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J



> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key" construct for UPSERT. With this construct, the 
> UPSERT VALUES statement would run atomically and would thus require a read 
> before write which would obviously have a negative impact on performance. For 
> an example of similar syntax, see the MySQL documentation at 
> http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J. 
> A related discussion is on PHOENIX-2909.
> Initially we'd support the following:
> # This would prevent the setting of VAL to 0 if the row already exists:
> {code}
> UPSERT INTO T (PK, VAL) VALUES ('a',0) 
> ON DUPLICATE KEY IGNORE;
> {code}
> # This would increment the values of COUNTER1 and COUNTER2 if the row already 
> exists and otherwise initialize them to 0:
> {code}
> UPSERT INTO T (PK, COUNTER1, COUNTER2) VALUES ('a',0) 
> ON DUPLICATE KEY COUNTER1 = COUNTER1 + 1, COUNTER2 = COUNTER2 + 1;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-6) Support ON DUPLICATE KEY construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-6:
---
Summary: Support ON DUPLICATE KEY construct  (was: Support on duplicate key 
ignore construct)

> Support ON DUPLICATE KEY construct
> --
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key ignore" construct (or it's SQL standard 
> equivalent) for UPSERT.
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-6) Support on duplicate key ignore construct

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-6:
---
Fix Version/s: 4.9.0

> Support on duplicate key ignore construct
> -
>
> Key: PHOENIX-6
> URL: https://issues.apache.org/jira/browse/PHOENIX-6
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> To support inserting a new row only if it doesn't already exist, we should 
> support the "on duplicate key ignore" construct (or it's SQL standard 
> equivalent) for UPSERT.
> See this discussion for more detail: 
> https://groups.google.com/d/msg/phoenix-hbase-user/Bof-TLrbTGg/68bnc8ZcWe0J



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: PHOENIX-3181_v9.patch

Two passed test runs in a row. Let's try one more time.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: (was: PHOENIX-3181_v9.patch)

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530830#comment-15530830
 ] 

Hadoop QA commented on PHOENIX-3181:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12830773/PHOENIX-3181_v9.patch
  against master branch at commit 58596bbc416ce577d3407910f1a127150d8c.
  ATTACHMENT ID: 12830773

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockContext.cleanUp(HRegion.java:5206)
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockImpl.release(HRegion.java:5246)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2898)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/604//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/604//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/604//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/604//console

This message is automatically generated.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: (was: PHOENIX-3181_v9.patch)

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: PHOENIX-3181_v9.patch

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530676#comment-15530676
 ] 

Hadoop QA commented on PHOENIX-3181:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12830767/PHOENIX-3181_v9.patch
  against master branch at commit 58596bbc416ce577d3407910f1a127150d8c.
  ATTACHMENT ID: 12830767

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockContext.cleanUp(HRegion.java:5206)
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockImpl.release(HRegion.java:5246)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2898)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/603//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/603//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/603//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/603//console

This message is automatically generated.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: PHOENIX-3181_v9.patch

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: (was: PHOENIX-3181_v9.patch)

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3281) Workaround surefire error of there was a timeout or other error in the fork

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3281.
---
   Resolution: Fixed
Fix Version/s: 4.9.0

Most of the issues seem to be related to HBase hanging when attempting to bring 
down the mini cluster. Adding a shutdown hook that just halts the JVM seems to 
have solved the majority of issues (as we don't need to do any cleanup for unit 
tests).

Further fixes can be done under a different JIRA that will primarily identify 
any tests that are hanging.
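
For illustration, a minimal sketch of the halt-on-shutdown idea (the class and 
method names here are hypothetical; the actual hook in the committed patch may 
differ):

{code}
// Hypothetical sketch, not the committed code: a shutdown hook that
// halts the JVM immediately so a hanging mini cluster cannot block exit.
public class HaltOnShutdownSketch {
    public static void install() {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // halt() exits at once, skipping all cleanup (no finalizers,
                // no waiting on non-daemon threads) -- fine for test JVMs.
                Runtime.getRuntime().halt(0);
            }
        });
    }
}
{code}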

> Workaround surefire error of there was a timeout or other error in the fork
> ---
>
> Key: PHOENIX-3281
> URL: https://issues.apache.org/jira/browse/PHOENIX-3281
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> The following error is occurring when running the methods in parallel, though 
> all tests are passing. It seems that the JVM is not shutting down as expected.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify 
> (HBaseManagedTimeTableReuseTest) on project phoenix-core: There was a timeout 
> or other error in the fork -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify 
> (HBaseManagedTimeTableReuseTest) on project phoenix-core: There was a timeout 
> or other error in the fork
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
>   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
>   at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: There was a 
> timeout or other error in the fork
>   at 
> org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:87)
>   at 
> org.apache.maven.plugin.failsafe.VerifyMojo.execute(VerifyMojo.java:202)
>   at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>   ... 20 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3281) Workaround surefire error of there was a timeout or other error in the fork

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3281:
-

Assignee: James Taylor

> Workaround surefire error of there was a timeout or other error in the fork
> ---
>
> Key: PHOENIX-3281
> URL: https://issues.apache.org/jira/browse/PHOENIX-3281
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
>
> The following error is occurring when running the methods in parallel, though 
> all tests are passing. It seems that the JVM is not shutting down as expected.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify 
> (HBaseManagedTimeTableReuseTest) on project phoenix-core: There was a timeout 
> or other error in the fork -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify 
> (HBaseManagedTimeTableReuseTest) on project phoenix-core: There was a timeout 
> or other error in the fork
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
>   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
>   at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: There was a 
> timeout or other error in the fork
>   at 
> org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:87)
>   at 
> org.apache.maven.plugin.failsafe.VerifyMojo.execute(VerifyMojo.java:202)
>   at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>   ... 20 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530480#comment-15530480
 ] 

Hadoop QA commented on PHOENIX-3181:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12830752/PHOENIX-3181_v9.patch
  against master branch at commit 58596bbc416ce577d3407910f1a127150d8c.
  ATTACHMENT ID: 12830752

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockContext.cleanUp(HRegion.java:5206)
+at 
org.apache.hadoop.hbase.regionserver.HRegion$RowLockImpl.release(HRegion.java:5246)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2898)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
+at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterTableIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/602//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/602//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/602//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/602//console

This message is automatically generated.

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3140) Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3140:
-

Assignee: James Taylor  (was: prakul agarwal)

> Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3140
> URL: https://issues.apache.org/jira/browse/PHOENIX-3140
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0
>
> Attachments: UpgradeIT.diff
>
>
> Stacktrace:
> {code}
> java.lang.AssertionError: 
> Expected :-100
> Actual   :0
>  
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.checkBaseColumnCount(UpgradeIT.java:555)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.testSettingBaseColumnCountForMultipleViewsOnTable(UpgradeIT.java:453)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3153) Convert join-related IT tests to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3153.
---
   Resolution: Fixed
Fix Version/s: 4.9.0

> Convert join-related IT tests to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3153
> URL: https://issues.apache.org/jira/browse/PHOENIX-3153
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3153.patch
>
>
> The following 5 test cases follow the same pattern (the initJoinTableValues 
> method for table generation) and are still extending BaseHBaseManagedTimeIT:
> HashJoinIT
> SortMergeJoinIT
> HashJoinLocalIndexIT
> SubqueryIT
> SubqueryUsingSortMergeJoinIT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3140) Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3140.
---
Resolution: Fixed

> Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3140
> URL: https://issues.apache.org/jira/browse/PHOENIX-3140
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0
>
> Attachments: UpgradeIT.diff
>
>
> Stacktrace:
> {code}
> java.lang.AssertionError: 
> Expected :-100
> Actual   :0
>  
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.checkBaseColumnCount(UpgradeIT.java:555)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.testSettingBaseColumnCountForMultipleViewsOnTable(UpgradeIT.java:453)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3140) Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3140:
--
Fix Version/s: 4.9.0

> Convert UpgradeIT to be derived from BaseHBaseManagedTimeTableReuseIT
> -
>
> Key: PHOENIX-3140
> URL: https://issues.apache.org/jira/browse/PHOENIX-3140
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: prakul agarwal
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.9.0
>
> Attachments: UpgradeIT.diff
>
>
> Stacktrace:
> {code}
> java.lang.AssertionError: 
> Expected :-100
> Actual   :0
>  
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.checkBaseColumnCount(UpgradeIT.java:555)
>   at 
> org.apache.phoenix.end2end.UpgradeIT.testSettingBaseColumnCountForMultipleViewsOnTable(UpgradeIT.java:453)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3181) Run test methods in parallel to reduce test suite run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3181:
--
Attachment: PHOENIX-3181_v9.patch

> Run test methods in parallel to reduce test suite run time
> --
>
> Key: PHOENIX-3181
> URL: https://issues.apache.org/jira/browse/PHOENIX-3181
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-3181_4.x-HBase-1.1_WIP.patch, 
> PHOENIX-3181_v1.patch, PHOENIX-3181_v2.patch, PHOENIX-3181_v3.patch, 
> PHOENIX-3181_v4.patch, PHOENIX-3181_v5.patch, PHOENIX-3181_v6.patch, 
> PHOENIX-3181_v7.patch, PHOENIX-3181_v8.patch, PHOENIX-3181_v9.patch, 
> parallelize_4.x-Hbase1.1_wip.patch, serverLogForParallelTests.txt
>
>
> With PHOENIX-3036, the tests within a test class can be executed in parallel 
> since the table names won't conflict. This should greatly reduce the time 
> taken to run all of our tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] Making upgrade to a new minor release a manual step

2016-09-28 Thread Samarth Jain
(Resending email with a proper subject)

Hello Phoenix folks,

The purpose of this email is twofold:
1) to introduce folks to the new optional upgrade process and,
2) to get a consensus on what the default behavior of the process
should be.

As you may already know, when a new minor release is rolled out, in order
to support new features Phoenix needs to update its internal metadata. This
update is done as part of the upgrade process that is automatically kicked
off when a Phoenix client for a new minor release connects to the HBase
cluster.

To provide more control over when this upgrade should be run, we wrote a new
feature which optionally makes this upgrade a manual step (see
https://issues.apache.org/jira/browse/PHOENIX-3174 for details). The
upgrade behavior is controlled by a client-side config named
phoenix.autoupgrade.enabled. If the config is set to false, then Phoenix
won't kick off the upgrade process automatically. When ready to upgrade,
users can kick off the upgrade process by issuing the EXECUTE UPGRADE SQL
command.
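
For illustration, a client opting out of the automatic upgrade and running it
manually later might look like this (a sketch; the JDBC URL is a placeholder):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class ManualUpgradeSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Opt out of the automatic metadata upgrade on connect.
        props.setProperty("phoenix.autoupgrade.enabled", "false");
        // Placeholder URL; point it at your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181", props);
             Statement stmt = conn.createStatement()) {
            // Until this runs, other SQL from the new client is rejected.
            stmt.execute("EXECUTE UPGRADE");
        }
    }
}
{code}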

Keep in mind that until the upgrade is run, Phoenix won't let you execute
any other SQL commands using the new minor release client. Clients running
older versions of Phoenix, though, will continue to work as before.

I propose that we should by default have the config
phoenix.autoupgrade.enabled set to false. Providing users more control and
making the upgrade process an explicit manual step is the right thing to
do, IMHO.

Interested to know what you all think.

Thanks,
Samarth


[jira] [Commented] (PHOENIX-3282) Convert BaseClientManagedTimeIT to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530141#comment-15530141
 ] 

James Taylor commented on PHOENIX-3282:
---

[~prakul] - are you available to help with this? Please let me know to prevent 
any duplication of effort.

> Convert BaseClientManagedTimeIT to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> ---
>
> Key: PHOENIX-3282
> URL: https://issues.apache.org/jira/browse/PHOENIX-3282
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> Convert BaseClientManagedTimeIT to be derived from 
> BaseHBaseManagedTimeTableReuseIT by:
> - Using unique table names in all tests
> - Removing any usage of BaseTest.ensureTableCreated()
> - Changing static methods and member variables to be instance level
> Also, we should convert these tests to not manage their own timestamps (i.e. 
> do not set the CURRENT_SCN property and do not open new connections) unless 
> the test depends on this functionality (which I believe is only the case for 
> PointInTimeQueryIT).
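
A minimal sketch of the unique-table-name part of the conversion (in the real
tests the helper lives in the shared test base class; this standalone version,
with made-up names and URL, just shows the idea):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueTableNameSketch {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    // Stand-in for whatever unique-name utility the test base class provides.
    static String generateUniqueName() {
        return "T_" + System.currentTimeMillis() + "_" + COUNTER.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        String tableName = generateUniqueName();
        // Placeholder URL for a local cluster.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181")) {
            conn.createStatement().execute("CREATE TABLE " + tableName
                + " (pk VARCHAR PRIMARY KEY, v BIGINT)");
            // With every test touching only its own table, methods can
            // share one cluster and run in parallel without conflicts.
        }
    }
}
{code}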



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3282) Convert BaseClientManagedTimeIT to be derived from BaseHBaseManagedTimeTableReuseIT

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3282:
--
Description: 
Convert BaseClientManagedTimeIT to be derived from 
BaseHBaseManagedTimeTableReuseIT by:
- Using unique table names in all tests
- Removing any usage of BaseTest.ensureTableCreated()
- Changing static methods and member variables to be instance level

Also, we should convert these tests to not manage their own timestamps (i.e. do 
not set the CURRENT_SCN property and do not open new connections) unless the 
test depends on this functionality (which I believe is only the case for 
PointInTimeQueryIT).

  was:
Convert BaseClientManagedTimeIT to be derived from 
BaseHBaseManagedTimeTableReuseIT by:
- Using unique table names in all tests
- Removing any usage of BaseTest.ensureTableCreated()
- Change static methods and member variables to be instance level


> Convert BaseClientManagedTimeIT to be derived from 
> BaseHBaseManagedTimeTableReuseIT
> ---
>
> Key: PHOENIX-3282
> URL: https://issues.apache.org/jira/browse/PHOENIX-3282
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> Convert BaseClientManagedTimeIT to be derived from 
> BaseHBaseManagedTimeTableReuseIT by:
> - Using unique table names in all tests
> - Removing any usage of BaseTest.ensureTableCreated()
> - Changing static methods and member variables to be instance level
> Also, we should convert these tests to not manage their own timestamps (i.e. 
> do not set the CURRENT_SCN property and do not open new connections) unless 
> the test depends on this functionality (which I believe is only the case for 
> PointInTimeQueryIT).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3290) Move and/or combine as many NeedsOwnCluster tests to bring down test run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3290.
---
Resolution: Fixed

> Move and/or combine as many NeedsOwnCluster tests to bring down test run time
> -
>
> Key: PHOENIX-3290
> URL: https://issues.apache.org/jira/browse/PHOENIX-3290
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3290_WIP.patch, PHOENIX-3290_addendum1.patch, 
> PHOENIX-3290_v2.patch, PHOENIX-3290_v3.patch, PHOENIX-3290_v4.patch, 
> PHOENIX-3290_v5.patch, PHOENIX-3290_v6.patch, PHOENIX-3290_v7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3293) Create separate test categories for stats enabled and disabled

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3293.
---
   Resolution: Fixed
Fix Version/s: 4.9.0

> Create separate test categories for stats enabled and disabled
> --
>
> Key: PHOENIX-3293
> URL: https://issues.apache.org/jira/browse/PHOENIX-3293
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3293.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3290) Move and/or combine as many NeedsOwnCluster tests to bring down test run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3290:
--
Fix Version/s: 4.9.0

> Move and/or combine as many NeedsOwnCluster tests to bring down test run time
> -
>
> Key: PHOENIX-3290
> URL: https://issues.apache.org/jira/browse/PHOENIX-3290
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3290_WIP.patch, PHOENIX-3290_addendum1.patch, 
> PHOENIX-3290_v2.patch, PHOENIX-3290_v3.patch, PHOENIX-3290_v4.patch, 
> PHOENIX-3290_v5.patch, PHOENIX-3290_v6.patch, PHOENIX-3290_v7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3290) Move and/or combine as many NeedsOwnCluster tests to bring down test run time

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3290:
-

Assignee: James Taylor

> Move and/or combine as many NeedsOwnCluster tests to bring down test run time
> -
>
> Key: PHOENIX-3290
> URL: https://issues.apache.org/jira/browse/PHOENIX-3290
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3290_WIP.patch, PHOENIX-3290_addendum1.patch, 
> PHOENIX-3290_v2.patch, PHOENIX-3290_v3.patch, PHOENIX-3290_v4.patch, 
> PHOENIX-3290_v5.patch, PHOENIX-3290_v6.patch, PHOENIX-3290_v7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3293) Create separate test categories for stats enabled and disabled

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3293:
-

Assignee: James Taylor

> Create separate test categories for stats enabled and disabled
> --
>
> Key: PHOENIX-3293
> URL: https://issues.apache.org/jira/browse/PHOENIX-3293
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3293.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3286) Use regular queue depth instead of overriding it to zero for tests

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3286.
---
Resolution: Fixed

> Use regular queue depth instead of overriding it to zero for tests
> --
>
> Key: PHOENIX-3286
> URL: https://issues.apache.org/jira/browse/PHOENIX-3286
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3286.patch
>
>
> Setting the queue depth to zero makes our thread pool synchronous, which 
> won't work when parallelizing the tests. We should remove that override and 
> just use the default value. The thread pool size is higher than we 
> need as well. We should lower it to a value of around 10.
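
For illustration, a sketch of setting these at the client (the property names 
are the standard Phoenix query-services keys as far as I know; verify them 
against QueryServices in your version, and the URL is a placeholder):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TestPoolConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // A queue depth of 0 degenerates to a synchronous handoff, which
        // serializes work; any positive depth restores real queueing.
        props.setProperty("phoenix.query.queueSize", "5000");
        // A small pool of ~10 threads is plenty for test workloads.
        props.setProperty("phoenix.query.threadPoolSize", "10");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181", props)) {
            // ... run tests against this connection ...
        }
    }
}
{code}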



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3286) Use regular queue depth instead of overriding it to zero for tests

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3286:
-

Assignee: James Taylor

> Use regular queue depth instead of overriding it to zero for tests
> --
>
> Key: PHOENIX-3286
> URL: https://issues.apache.org/jira/browse/PHOENIX-3286
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3286.patch
>
>
> Setting the queue depth to zero makes our thread pool synchronous, which 
> won't work when parallelizing the tests. We should remove that override and 
> just use the default value. The thread pool size is higher than we 
> need as well. We should lower it to a value of around 10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3286) Use regular queue depth instead of overriding it to zero for tests

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3286:
--
Fix Version/s: 4.9.0

> Use regular queue depth instead of overriding it to zero for tests
> --
>
> Key: PHOENIX-3286
> URL: https://issues.apache.org/jira/browse/PHOENIX-3286
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3286.patch
>
>
> Setting the queue depth to zero makes our thread pool synchronous, which 
> won't work when parallelizing the tests. We should remove that override and 
> just use the default value. The thread pool size is higher than we 
> need as well. We should lower it to a value of around 10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3335) Improve documention of unsigned_long type mapping

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3335.
---
Resolution: Fixed

Thanks for the patch, [~yhxx511]. I've added you as a Phoenix contributor, 
assigned the issue to you, and committed it. AFAIR, the build.sh is a bit picky 
about the Java version used to compile. I'm using Java(TM) SE Runtime Environment 
(build 1.7.0_80-b15).

> Improve documention of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>Assignee: William Yang
> Attachments: PHOENIX-3335.patch
>
>
> When I use the increment function in HBase 2.0.x to
> push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase 
> and then create a Phoenix table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> the pv column, typed as BIGINT, maps the value to 
> -9223372036854775805.
> When I use the UNSIGNED_LONG type instead, it works fine.
> It looks a little strange.
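
If I recall Phoenix's serialization correctly (worth verifying against PLong in
your version), BIGINT flips the sign bit of the big-endian encoding so that
negative values sort first, while UNSIGNED_LONG is the plain big-endian form
that Bytes.toBytes(long) and HBase's Increment produce. A pure-Java sketch of
that assumption reproduces the reported number:

{code}
public class LongEncodingSketch {
    public static void main(String[] args) {
        // A raw big-endian cell value of 3, e.g. left behind by increments.
        long rawCellValue = 3L;
        // Reading it through BIGINT undoes a sign-bit flip that was never
        // applied on write, shifting the value by Long.MIN_VALUE.
        long decodedAsBigint = rawCellValue ^ Long.MIN_VALUE;
        System.out.println(decodedAsBigint); // -9223372036854775805
        // UNSIGNED_LONG decodes the plain big-endian bytes directly, so
        // the value round-trips unchanged.
    }
}
{code}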



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3331) Bug in calculating minDisabledTimestamp for a batch

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530062#comment-15530062
 ] 

James Taylor commented on PHOENIX-3331:
---

Please commit ASAP, [~an...@apache.org], to the 4.8, 4.x, and master branches.

> Bug in calculating minDisabledTimestamp for a batch
> ---
>
> Key: PHOENIX-3331
> URL: https://issues.apache.org/jira/browse/PHOENIX-3331
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3331.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3335) Improve documention of unsigned_long type mapping

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3335:
--
Assignee: William Yang

> Improve documention of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
>Assignee: William Yang
> Attachments: PHOENIX-3335.patch
>
>
> When I use the increment function in HBase 2.0.x to
> push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase 
> and then create a Phoenix table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> the pv column, typed as BIGINT, maps the value to 
> -9223372036854775805.
> When I use the UNSIGNED_LONG type instead, it works fine.
> It looks a little strange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3254) IndexId Sequence is incremented even if index exists already.

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530037#comment-15530037
 ] 

ASF GitHub Bot commented on PHOENIX-3254:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/211
  
+1 on the changes (but please fix any formatting issues). Nice work, 
@ankitsinghal.


> IndexId Sequence is incremented even if index exists already.
> -
>
> Key: PHOENIX-3254
> URL: https://issues.apache.org/jira/browse/PHOENIX-3254
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3254.patch, PHOENIX-3254_wip.patch
>
>
> We are incrementing the sequence at the client even if we are not going to 
> create an index (because the index already exists and the user is using 
> CREATE INDEX IF NOT EXISTS, or because the DDL fails at a later stage, e.g. 
> the parent table is not found).
> If this keeps happening, the user may reach the limit of Short.MAX_VALUE, 
> and a TOO_MANY_INDEXES exception will be thrown if the user tries to create 
> a new index.
> To prevent this, we should increment the sequence only when we are actually 
> about to create an index, at the server.
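
A sketch of the scenario in the description (URL, view, index, and column
names are all placeholders; MY_VIEW is assumed to already exist):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexIdLeakSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // The index is only created once, but before the fix every
            // call still incremented the client-side indexId sequence,
            // creeping toward the Short.MAX_VALUE limit.
            for (int i = 0; i < 1000; i++) {
                stmt.execute("CREATE INDEX IF NOT EXISTS IDX1 ON MY_VIEW (V1)");
            }
        }
    }
}
{code}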



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #211: PHOENIX-3254 IndexId Sequence is incremented even if ind...

2016-09-28 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/211
  
+1 on the changes (but please fix any formatting issues). Nice work, 
@ankitsinghal.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3254) IndexId Sequence is incremented even if index exists already.

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530035#comment-15530035
 ] 

ASF GitHub Bot commented on PHOENIX-3254:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80950387
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, 
CreateTableRequest request,
 cell.getTimestamp(), 
Type.codeToType(cell.getTypeByte()), bytes);
 cells.add(viewConstantCell);
 }
+Short indexId = null;
+if (request.hasAllocateIndexId() && 
request.getAllocateIndexId()) {
+String tenantIdStr = tenantIdBytes.length == 0 ? null 
: Bytes.toString(tenantIdBytes);
+final Properties props = new Properties();
+UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+try (PhoenixConnection connection = 
DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), 
props).unwrap(PhoenixConnection.class)){
+PName physicalName = parentTable.getPhysicalName();
+int nSequenceSaltBuckets = 
connection.getQueryServices().getSequenceSaltBuckets();
+SequenceKey key = 
MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+nSequenceSaltBuckets, 
parentTable.isNamespaceMapped() );
+// TODO Review Earlier sequence was created at 
(SCN-1/LATEST_TIMESTAMP) and incremented at the client 
max(SCN,dataTable.getTimestamp), but it seems we should
+// use always LATEST_TIMESTAMP to avoid seeing 
wrong sequence values by different connection having SCN
+// or not. 
+long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+try {
+
connection.getQueryServices().createSequence(key.getTenantId(), 
key.getSchemaName(), key.getSequenceName(),
--- End diff --

The RPCs are minimal here because of the way sequences are implemented. By 
default, we cache 100 sequence values and dole them out as needed. So there's 
only a single RPC per 100 invocations of this.
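
As a sketch of that batching from the SQL side (the URL is a placeholder and
the sequence name is made up for this example):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceCacheSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // CACHE 100 mirrors the default batching described above: the
            // client reserves 100 values per server round trip.
            stmt.execute("CREATE SEQUENCE demo_seq CACHE 100");
            for (int i = 0; i < 200; i++) {
                // Roughly 2 RPCs for these 200 calls, one per cached block.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT NEXT VALUE FOR demo_seq")) {
                    rs.next();
                }
            }
        }
    }
}
{code}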


> IndexId Sequence is incremented even if index exists already.
> -
>
> Key: PHOENIX-3254
> URL: https://issues.apache.org/jira/browse/PHOENIX-3254
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3254.patch, PHOENIX-3254_wip.patch
>
>
> We are incrementing the sequence at the client even if we are not going to 
> create an index (because the index already exists and the user is using 
> CREATE INDEX IF NOT EXISTS, or because the DDL fails at a later stage, e.g. 
> the parent table is not found).
> If this keeps happening, the user may reach the limit of Short.MAX_VALUE, 
> and a TOO_MANY_INDEXES exception will be thrown if the user tries to create 
> a new index.
> To prevent this, we should increment the sequence only when we are actually 
> about to create an index, at the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80950387
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, 
CreateTableRequest request,
 cell.getTimestamp(), 
Type.codeToType(cell.getTypeByte()), bytes);
 cells.add(viewConstantCell);
 }
+Short indexId = null;
+if (request.hasAllocateIndexId() && 
request.getAllocateIndexId()) {
+String tenantIdStr = tenantIdBytes.length == 0 ? null 
: Bytes.toString(tenantIdBytes);
+final Properties props = new Properties();
+UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+try (PhoenixConnection connection = 
DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), 
props).unwrap(PhoenixConnection.class)){
+PName physicalName = parentTable.getPhysicalName();
+int nSequenceSaltBuckets = 
connection.getQueryServices().getSequenceSaltBuckets();
+SequenceKey key = 
MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+nSequenceSaltBuckets, 
parentTable.isNamespaceMapped() );
+// TODO Review Earlier sequence was created at 
(SCN-1/LATEST_TIMESTAMP) and incremented at the client 
max(SCN,dataTable.getTimestamp), but it seems we should
+// use always LATEST_TIMESTAMP to avoid seeing 
wrong sequence values by different connection having SCN
+// or not. 
+long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+try {
+
connection.getQueryServices().createSequence(key.getTenantId(), 
key.getSchemaName(), key.getSequenceName(),
--- End diff --

The RPCs are minimal here because of the way sequences are implemented. By 
default, we cache 100 sequence values and dole them out as needed. So there's 
only a single RPC per 100 invocations of this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530003#comment-15530003
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3336:
--

[~houliang] Can you provide the DDL statements? It would be great if you could 
add a test case and upload it here.

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: wrong-index-2.png, wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and you then use the index column to 
> retrieve data, you get wrong results.
> As shown in the attachments, I create one table called orders_5 and one 
> local index on table orders_5 called clerk_5; then I use two clients to 
> upsert data to the same row key of table orders_5. You will see that the 
> local index clerk_5 has some stale records (which may be OK for eventual 
> consistency); however, when you use the previous value to retrieve the 
> record, you still get a result, and worse, the result is wrong: it is 
> neither the record you inserted before nor the record in the primary 
> table (in this case, the table orders_5).
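
For reference, the shape of the repro described above might look like the
following (the exact DDL is in the JIRA attachments, so the column definitions
here are guesses; in the report the two upserts race from separate clients):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LocalIndexStaleReadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; table/index names follow the report.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE ORDERS_5 "
                + "(ORDER_ID BIGINT PRIMARY KEY, CLERK VARCHAR)");
            stmt.execute("CREATE LOCAL INDEX CLERK_5 ON ORDERS_5 (CLERK)");
            // In the report these two upserts run concurrently from two
            // clients against the same row key:
            stmt.execute("UPSERT INTO ORDERS_5 VALUES (1, 'clerk-a')");
            stmt.execute("UPSERT INTO ORDERS_5 VALUES (1, 'clerk-b')");
            conn.commit();
            // Querying with the stale value should match nothing, but the
            // report says a wrong row still comes back via the index.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM ORDERS_5 WHERE CLERK = 'clerk-a'")) {
                while (rs.next()) {
                    System.out.println(rs.getString("CLERK"));
                }
            }
        }
    }
}
{code}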



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3161) Check possibility of moving rebuilding code to coprocessor of data table.

2016-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529925#comment-15529925
 ] 

Hadoop QA commented on PHOENIX-3161:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12830693/PHOENIX-3161.patch
  against master branch at commit 58596bbc416ce577d3407910f1a127150d8c.
  ATTACHMENT ID: 12830693

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
38 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
++ 
SchemaUtil.getPhysicalTableName(fullTableName.getBytes(), 
isNamespaceMapped)+"\nCLIENT MERGE SORT";
++ 
SchemaUtil.getPhysicalTableName(fullTableName.getBytes(), 
isNamespaceMapped)+"\nCLIENT MERGE SORT";
+TableRef tableRef = new TableRef(null, dataPTable, 
HConstants.LATEST_TIMESTAMP, false);
+MutationPlan plan = 
compiler.compile(Collections.singletonList(tableRef), null, null, null,
+Scan dataTableScan = 
IndexManagementUtil.newLocalStateScan(plan.getContext().getScan(),
+
dataTableScan.setAttribute(BaseScannerRegionObserver.REBUILD_INDEXES, 
TRUE_BYTES);
+IndexMaintainer.serializeAdditional(dataPTable, 
indexMetaDataPtr, indexesToPartiallyRebuild,
+} else if (ScanUtil.isIndexRebuild(scan)) { return rebuildIndices(s, 
region, scan, env.getConfiguration()); }
+private RegionScanner rebuildIndices(final RegionScanner innerScanner, 
final Region region, final Scan scan,
+int batchSize = config.getInt(MUTATE_BATCH_SIZE_ATTRIB, 
QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at org.apache.oozie.test.MiniHCatServer$1.run(MiniHCatServer.java:137)
at 
org.apache.oozie.test.XTestCase$MiniClusterShutdownMonitor.run(XTestCase.java:1114)
at org.apache.oozie.test.XTestCase.waitFor(XTestCase.java:713)
at 
org.apache.oozie.action.hadoop.TestHiveActionExecutor.testHiveAction(TestHiveActionExecutor.java:168)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/601//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/601//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/601//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/601//console

This message is automatically generated.

> Check possibility of moving rebuilding code to coprocessor of data table.
> -
>
> Key: PHOENIX-3161
> URL: https://issues.apache.org/jira/browse/PHOENIX-3161
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3161.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80925250
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -1445,6 +1445,9 @@ public MetaDataResponse call(MetaDataService 
instance) throws IOException {
 builder.addTableMetadataMutations(mp.toByteString());
 }
 
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+if (allocateIndexId) {
--- End diff --

nit: just format code here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houliang Qi updated PHOENIX-3336:
-
Attachment: wrong-index-2.png

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: wrong-index-2.png, wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and you then use the index column to 
> retrieve data, you get wrong results.
> As shown in the attachments, I create one table called orders_5 and one 
> local index on table orders_5 called clerk_5; then I use two clients to 
> upsert data to the same row key of table orders_5. You will see that the 
> local index clerk_5 has some stale records (which may be OK for eventual 
> consistency); however, when you use the previous value to retrieve the 
> record, you still get a result, and worse, the result is wrong: it is 
> neither the record you inserted before nor the record in the primary 
> table (in this case, the table orders_5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3254) IndexId Sequence is incremented even if index exists already.

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529749#comment-15529749
 ] 

ASF GitHub Bot commented on PHOENIX-3254:
-

Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80925250
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -1445,6 +1445,9 @@ public MetaDataResponse call(MetaDataService 
instance) throws IOException {
 builder.addTableMetadataMutations(mp.toByteString());
 }
 
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+if (allocateIndexId) {
--- End diff --

nit: just format code here.


> IndexId Sequence is incremented even if index exists already.
> -
>
> Key: PHOENIX-3254
> URL: https://issues.apache.org/jira/browse/PHOENIX-3254
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3254.patch, PHOENIX-3254_wip.patch
>
>
> As we are incrementing sequence at the client even if we are not going to 
> create a index (in case index already exists and user is using CREATE INDEX 
> IF NOT EXISTS) or DDL failed in later stage(due to parent table not found or 
> something).
> If this keeps on happening then user may reach the limit of Short.MAX_VALUE 
> and TOO_MANY_INDEXES exception will be thrown if user tries to create a new 
> index.
> To prevent, this we should increment sequences when we are about to create a 
> index at server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houliang Qi updated PHOENIX-3336:
-
Description: 
When using phoenix local secondary index, two clients concurrent upsert data to 
the same row key, while using the index column to retrieve data, it gets the 
wrong results.

Just like the attachments, I create one table called orders_5, and create one 
local index on table orders_5, called clerk_5; then I use two clients to upsert 
data to the same row key of  table orders_5, and you will see that, the local 
index clerk_5 have some stale record (maybe its OK for eventual consistency),  
however, when you use the previous value to retrieve the record, you can still 
get the result, even more serious, the result is wrong, namely it not the 
record which you have insert before, and also not the record in the primary 
table(in the case ,is the table orders_5)


  was:When using a Phoenix local secondary index, if two clients concurrently 
upsert data to the same row key and you then use the index column to retrieve 
data, you get wrong results.


> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and the index column is then used to 
> retrieve data, wrong results are returned.
> As shown in the attachments, I created one table called orders_5 and one 
> local index on it called clerk_5; I then used two clients to upsert data to 
> the same row key of table orders_5. You will see that the local index 
> clerk_5 holds some stale records (which may be acceptable under eventual 
> consistency). However, when you use the previous value to retrieve the 
> record, you still get a result, and worse, the result is wrong: it is 
> neither the record you inserted before nor the record in the primary table 
> (in this case, the table orders_5).
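
For concreteness, here is a minimal two-client sketch of the scenario above 
(not a verified repro). The ORDERKEY and CLERK column names and the JDBC URL 
are assumptions for illustration; the attached create_table_orders.sql has the 
real DDL:

{code}
// A minimal sketch, assuming hypothetical ORDERKEY/CLERK columns on ORDERS_5:
// two threads stand in for the two clients and upsert different CLERK values
// to the same row key; we then query through the indexed CLERK column. Per
// the report, a stale CLERK value can still return a row, and that row matches
// neither what was written nor what the data table holds.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LocalIndexStaleReadSketch {
    static void upsert(String clerk) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO ORDERS_5 (ORDERKEY, CLERK) VALUES (?, ?)")) {
            ps.setInt(1, 1); // both clients write the same row key
            ps.setString(2, clerk);
            ps.executeUpdate();
            conn.commit();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> { try { upsert("Clerk#001"); } catch (Exception e) { throw new RuntimeException(e); } });
        Thread t2 = new Thread(() -> { try { upsert("Clerk#002"); } catch (Exception e) { throw new RuntimeException(e); } });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Query through the local index using the value that may now be stale.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT ORDERKEY, CLERK FROM ORDERS_5 WHERE CLERK = ?")) {
            ps.setString(1, "Clerk#001");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " -> " + rs.getString(2));
                }
            }
        }
    }
}
{code}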



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houliang Qi updated PHOENIX-3336:
-
Attachment: wrong-index.png

> get the wrong results when using the local secondary index
> --
>
> Key: PHOENIX-3336
> URL: https://issues.apache.org/jira/browse/PHOENIX-3336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: hbase-1.1.2
>Reporter: Houliang Qi
>  Labels: phoenix, secondaryIndex
> Attachments: wrong-index.png
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When using a Phoenix local secondary index, if two clients concurrently 
> upsert data to the same row key and the index column is then used to 
> retrieve data, wrong results are returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3254) IndexId Sequence is incremented even if index exists already.

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529642#comment-15529642
 ] 

ASF GitHub Bot commented on PHOENIX-3254:
-

Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80916280
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, CreateTableRequest request,
                             cell.getTimestamp(), Type.codeToType(cell.getTypeByte()), bytes);
                         cells.add(viewConstantCell);
                     }
+                Short indexId = null;
+                if (request.hasAllocateIndexId() && request.getAllocateIndexId()) {
+                    String tenantIdStr = tenantIdBytes.length == 0 ? null : Bytes.toString(tenantIdBytes);
+                    final Properties props = new Properties();
+                    UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+                    try (PhoenixConnection connection = DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), props).unwrap(PhoenixConnection.class)) {
+                        PName physicalName = parentTable.getPhysicalName();
+                        int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
+                        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+                                nSequenceSaltBuckets, parentTable.isNamespaceMapped());
+                        // TODO Review: earlier the sequence was created at (SCN-1/LATEST_TIMESTAMP) and incremented at the client at max(SCN, dataTable.getTimestamp), but it seems we should
+                        // always use LATEST_TIMESTAMP to avoid seeing wrong sequence values depending on whether a connection has an SCN
+                        // or not.
+                        long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+                        try {
+                            connection.getQueryServices().createSequence(key.getTenantId(), key.getSchemaName(), key.getSequenceName(),
--- End diff --

@ankitsinghal  This patch introduces inter-table RPC calls, but they seem to be 
needed. What do you think of moving the sequence creation to MetaDataClient and 
performing only the sequence increment here? That way we could at least reduce 
the RPC calls made here.


> IndexId Sequence is incremented even if index exists already.
> -
>
> Key: PHOENIX-3254
> URL: https://issues.apache.org/jira/browse/PHOENIX-3254
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3254.patch, PHOENIX-3254_wip.patch
>
>
> We increment the index id sequence at the client even when no index is going 
> to be created, either because the index already exists and the user issued 
> CREATE INDEX IF NOT EXISTS, or because the DDL fails at a later stage (e.g. 
> the parent table is not found).
> If this keeps happening, the user may eventually exhaust Short.MAX_VALUE 
> index ids, and a TOO_MANY_INDEXES exception will be thrown when the user 
> tries to create a new index.
> To prevent this, we should increment the sequence on the server, at the point 
> where we are actually about to create the index.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80916280
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, CreateTableRequest request,
                             cell.getTimestamp(), Type.codeToType(cell.getTypeByte()), bytes);
                         cells.add(viewConstantCell);
                     }
+                Short indexId = null;
+                if (request.hasAllocateIndexId() && request.getAllocateIndexId()) {
+                    String tenantIdStr = tenantIdBytes.length == 0 ? null : Bytes.toString(tenantIdBytes);
+                    final Properties props = new Properties();
+                    UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+                    try (PhoenixConnection connection = DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), props).unwrap(PhoenixConnection.class)) {
+                        PName physicalName = parentTable.getPhysicalName();
+                        int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
+                        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+                                nSequenceSaltBuckets, parentTable.isNamespaceMapped());
+                        // TODO Review: earlier the sequence was created at (SCN-1/LATEST_TIMESTAMP) and incremented at the client at max(SCN, dataTable.getTimestamp), but it seems we should
+                        // always use LATEST_TIMESTAMP to avoid seeing wrong sequence values depending on whether a connection has an SCN
+                        // or not.
+                        long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+                        try {
+                            connection.getQueryServices().createSequence(key.getTenantId(), key.getSchemaName(), key.getSequenceName(),
--- End diff --

@ankitsinghal  This patch introduces inter-table RPC calls, but they seem to be 
needed. What do you think of moving the sequence creation to MetaDataClient and 
performing only the sequence increment here? That way we could at least reduce 
the RPC calls made here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-3336) get the wrong results when using the local secondary index

2016-09-28 Thread Houliang Qi (JIRA)
Houliang Qi created PHOENIX-3336:


 Summary: get the wrong results when using the local secondary index
 Key: PHOENIX-3336
 URL: https://issues.apache.org/jira/browse/PHOENIX-3336
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
 Environment: hbase-1.1.2
Reporter: Houliang Qi


When using a Phoenix local secondary index, if two clients concurrently upsert 
data to the same row key and the index column is then used to retrieve data, 
wrong results are returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3161) Check possibility of moving rebuilding code to coprocessor of data table.

2016-09-28 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3161:
---
Attachment: PHOENIX-3161.patch

> Check possibility of moving rebuilding code to coprocessor of data table.
> -
>
> Key: PHOENIX-3161
> URL: https://issues.apache.org/jira/browse/PHOENIX-3161
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3161.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3161) Check possibility of moving rebuilding code to coprocessor of data table.

2016-09-28 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3161:
---
Attachment: (was: PHOENIX-3161.patch)

> Check possibility of moving rebuilding code to coprocessor of data table.
> -
>
> Key: PHOENIX-3161
> URL: https://issues.apache.org/jira/browse/PHOENIX-3161
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3161.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3161) Check possibility of moving rebuilding code to coprocessor of data table.

2016-09-28 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3161:
---
Attachment: PHOENIX-3161.patch

[~jamestaylor], can you please review? This is what we discussed on PHOENIX-3277.

> Check possibility of moving rebuilding code to coprocessor of data table.
> -
>
> Key: PHOENIX-3161
> URL: https://issues.apache.org/jira/browse/PHOENIX-3161
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3161.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3335) Improve documentation of unsigned_long type mapping

2016-09-28 Thread William Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528692#comment-15528692
 ] 

William Yang commented on PHOENIX-3335:
---

build.sh failed on my Mac and I'm still trying to make it work. In the 
meantime, you can take a look at the changes to the .md file.  [~giacomotaylor]

> Improve documentation of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
> Attachments: PHOENIX-3335.patch
>
>
> When I use the increment function in HBase 2.0.x
> to push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase,
> and then create a Phoenix table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> then the pv column, declared as BIGINT, maps the stored value to 
> -9223372036854775805.
> When I use the UNSIGNED_LONG type instead, it works fine.
> It looks a little strange.
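
The difference follows from the two types' on-disk formats: HBase's Increment 
stores a plain big-endian two's-complement long, which is exactly the 
representation Phoenix's UNSIGNED_LONG expects, while Phoenix's BIGINT 
additionally flips the sign bit so that negative values sort before positive 
ones. A minimal sketch of the byte math, using only 
org.apache.hadoop.hbase.util.Bytes (Phoenix's real codecs live in PLong and 
PUnsignedLong):

{code}
import org.apache.hadoop.hbase.util.Bytes;

public class LongEncodingSketch {
    public static void main(String[] args) {
        // What HBase Increment stores for a counter value of 3: 0x00...03.
        byte[] raw = Bytes.toBytes(3L);
        System.out.println(Bytes.toLong(raw));  // 3, UNSIGNED_LONG's reading

        // BIGINT assumes the sign bit was flipped on write, so its decode
        // flips it back, turning 0x00...03 into 0x80...03.
        raw[0] ^= (byte) 0x80;
        System.out.println(Bytes.toLong(raw));  // -9223372036854775805, BIGINT's reading
    }
}
{code}

So the -9223372036854775805 in the report is what a raw HBase counter value of 
3 decodes to when read through BIGINT.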



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3335) Improve documentation of unsigned_long type mapping

2016-09-28 Thread William Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Yang updated PHOENIX-3335:
--
Attachment: PHOENIX-3335.patch

> Improve documentation of unsigned_long type mapping
> -
>
> Key: PHOENIX-3335
> URL: https://issues.apache.org/jira/browse/PHOENIX-3335
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Kui Xiang
> Attachments: PHOENIX-3335.patch
>
>
> When I use the increment function in HBase 2.0.x
> to push a value like '\x00\x00\x00\x00\x00\x00\x00\x01' into HBase,
> and then create a Phoenix table like:
> 
> create table click_pv(pk varchar primary key,"default"."min_time" 
> VARCHAR,"default"."pid" VARCHAR,"default"."pv" BIGINT);
> 
> then the pv column, declared as BIGINT, maps the stored value to 
> -9223372036854775805.
> When I use the UNSIGNED_LONG type instead, it works fine.
> It looks a little strange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3331) Bug in calculating minDisabledTimestamp for a batch

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3331:
--
Fix Version/s: 4.8.2
   4.9.0

> Bug in calculating minDisabledTimestamp for a batch
> ---
>
> Key: PHOENIX-3331
> URL: https://issues.apache.org/jira/browse/PHOENIX-3331
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3331.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3331) Bug in calculating minDisabledTimestamp for a batch

2016-09-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528590#comment-15528590
 ] 

James Taylor commented on PHOENIX-3331:
---

+1

> Bug in calculating minDisabledTimestamp for a batch
> ---
>
> Key: PHOENIX-3331
> URL: https://issues.apache.org/jira/browse/PHOENIX-3331
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3331.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3259) Create fat jar for transaction manager

2016-09-28 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3259.
---
Resolution: Won't Fix

The workaround is to remove the guava 12 jar from the HBase lib dir and replace 
it with guava 13. The permanent fix is for Tephra to not depend on guava 13, 
since that's not what HBase ships with.

> Create fat jar for transaction manager
> --
>
> Key: PHOENIX-3259
> URL: https://issues.apache.org/jira/browse/PHOENIX-3259
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Due to the incompatible guava version in HBase (12 instead of 13), the 
> transaction manager will not work by just pointing it to the HBase lib dir. A 
> reasonable alternative would be for Phoenix to build another fat jar 
> specifically for the transaction manager which includes all necessary 
> dependencies (namely guava 13). Then the bin/tephra script (we should rename 
> that to tephra.sh perhaps?) would simply need to have this jar on the 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3322) TPCH 100 query 2 exceeds size of hash cache

2016-09-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528545#comment-15528545
 ] 

Lars Hofhansl commented on PHOENIX-3322:


Anything between 100 MB and 2 GB to try?


Also, unrelated... Is this a test system?
{code}
16/09/13 20:35:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:35:30 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
{code}

Read from disk will be slow(er) without the native library, and SCR can't be 
used.
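
If the goal is simply to give the hash cache more headroom, the limit in the 
stack trace (104857600 bytes) is the default of 
phoenix.query.maxServerCacheBytes, a client-side property. A sketch of trying 
200 MB, on the assumption that passing it as a connection property on the first 
connection in the JVM takes effect (it is more commonly set in the client-side 
hbase-site.xml):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class BiggerHashCacheSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // 200 MB instead of the 100 MB default behind the
        // MaxServerCacheSizeExceededException in the report.
        props.setProperty("phoenix.query.maxServerCacheBytes", "209715200");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // ... run TPC-H query 2 here ...
        }
    }
}
{code}

Whether 200 MB is enough for this sub plan, and whether the region servers have 
the memory to hold it, are separate questions.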

> TPCH 100 query 2 exceeds size of hash cache
> ---
>
> Key: PHOENIX-3322
> URL: https://issues.apache.org/jira/browse/PHOENIX-3322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: HDP 2.4.2 + 4.0.8 binary download
>Reporter: Aaron Molitor
>
> Executing  TPC-H query 2 results in the following error:
> h5. output from sqlline:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 16/09/13 20:35:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/13 20:35:30 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 1/1  SELECT 
> S_ACCTBAL, 
> S_NAME, 
> N_NAME, 
> P_PARTKEY, 
> P_MFGR, 
> S_ADDRESS, 
> S_PHONE, 
> S_COMMENT 
> FROM 
> TPCH.PART, 
> TPCH.SUPPLIER, 
> TPCH.PARTSUPP, 
> TPCH.NATION, 
> TPCH.REGION 
> WHERE 
> P_PARTKEY = PS_PARTKEY 
> AND S_SUPPKEY = PS_SUPPKEY 
> AND P_SIZE = 15  
> AND P_TYPE LIKE '%BRASS' 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_REGIONKEY = R_REGIONKEY 
> AND R_NAME = 'EUROPE' 
> AND PS_SUPPLYCOST = ( 
> SELECT MIN(PS_SUPPLYCOST) 
> FROM 
> TPCH.PARTSUPP, 
> TPCH.SUPPLIER, 
> TPCH.NATION, 
> TPCH.REGION 
> WHERE 
> P_PARTKEY = PS_PARTKEY 
> AND S_SUPPKEY = PS_SUPPKEY 
> AND S_NATIONKEY = N_NATIONKEY 
> AND N_REGIONKEY = R_REGIONKEY 
> AND R_NAME = 'EUROPE' 
> ) 
> ORDER BY  
> S_ACCTBAL DESC, 
> N_NAME, 
> S_NAME, 
> P_PARTKEY 
> LIMIT 100 
> ;
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.runCommands(SqlLine.java:1710)
> at sqlline.Commands.run(Commands.java:1285)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:803)
> at sqlline.SqlLine.initArgs(SqlLine.java:613)
> at sqlline.SqlLine.begin(SqlLine.java:656)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size 
> of hash cache (104857615 bytes) exceeds the maximum allowed size (104857600 
> bytes)
> at 
> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:110)
> at 
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
> at 
> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:385)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:167)
>