Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread James Taylor
Hi Francis,
Is that the complete log for the transaction manager? If not, would you
mind attaching it to a new JIRA?
Thanks,
James

On Wednesday, August 31, 2016, F21  wrote:

> Hey Thomas,
>
> Where are the Transaction Manager logs located? I have a
> /tmp/tephra-/tephra-service--m9edd51-hmaster1.m9edd51.log, which was
> where I got the logs from yesterday. There wasn't any information on
> whether the Transaction Manager crashed or not:
>
> Here's the last half of the logs:
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.compiler=
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.version=4.4.0-36-generic
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.name=hadoop
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.home=/opt/hbase
> 2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/hbase
> 2016-08-31 22:31:43,784 INFO  [main] zookeeper.ZooKeeper: Initiating
> client connection, connectString=m9edd51-zookeeper.m9edd51
> sessionTimeout=9 watcher=org.apache.tephra.zookeeper.
> TephraZKClientService$5@45c7e403
> 2016-08-31 22:31:43,824 INFO  
> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server
> m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to
> authenticate using SASL (unknown error)
> 2016-08-31 22:31:43,829 INFO  
> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
> zookeeper.ClientCnxn: Socket connection established to
> m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
> 2016-08-31 22:31:43,835 INFO  
> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server
> m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 0x156e2ba5ba50004,
> negotiated timeout = 4
> 2016-08-31 22:31:43,881 INFO  [main] inmemory.InMemoryTransactionService:
> Configuring TransactionService, address: 0.0.0.0, port: 15165, threads: 20,
> io threads: 2, max read buffer (bytes): 16777216
> 2016-08-31 22:31:43,882 INFO  [main] tephra.TransactionServiceMain:
> Starting TransactionServiceMain
> 2016-08-31 22:31:43,890 INFO  [main] zookeeper.LeaderElection: Start
> leader election on m9edd51-zookeeper.m9edd51/tx.service/leader with guid
> 9fd6dcfa-72eb-4705-a79f-aad1ec914945
> 2016-08-31 22:31:43,953 INFO  [leader-election-tx.service-leader] 
> metrics.DefaultMetricsCollector:
> Configured metrics report to emit every 60 seconds
> 2016-08-31 22:31:44,177 INFO  [ThriftRPCServer] tephra.TransactionManager:
> Starting transaction manager.
> 2016-08-31 22:31:44,179 INFO  [DefaultMetricsCollector STARTING] 
> metrics.DefaultMetricsCollector:
> Started metrics reporter
> 2016-08-31 22:31:44,309 WARN  [HDFSTransactionStateStorage STARTING]
> util.NativeCodeLoader: Unable to load native-hadoop library for your
> platform... using builtin-java classes where applicable
> 2016-08-31 22:31:45,041 INFO  [HDFSTransactionStateStorage STARTING]
> persist.HDFSTransactionStateStorage: Using snapshot dir
> /tmp/tephra/snapshots
> 2016-08-31 22:31:45,118 INFO  [ThriftRPCServer] 
> persist.HDFSTransactionStateStorage:
> Creating snapshot dir at /tmp/tephra/snapshots
> 2016-08-31 22:31:45,212 INFO  [ThriftRPCServer] 
> persist.HDFSTransactionStateStorage:
> No snapshot files found in /tmp/tephra/snapshots
> 2016-08-31 22:31:45,214 INFO  [ThriftRPCServer] tephra.TransactionManager:
> Starting periodic timed-out transaction cleanup every 10 seconds with
> default timeout of 60 seconds.
> 2016-08-31 22:31:45,215 INFO  [ThriftRPCServer] tephra.TransactionManager:
> Starting periodic snapshot thread, frequency = 300 seconds, location =
> /tmp/tephra/snapshots
> 2016-08-31 22:31:45,215 INFO  [ThriftRPCServer] tephra.TransactionManager:
> Starting periodic Metrics Emitter thread, frequency = 1
> 2016-08-31 22:31:45,242 INFO  [ThriftRPCServer] rpc.ThriftRPCServer:
> Starting RPC server for TTransactionServer
> 2016-08-31 22:31:45,255 INFO  [ThriftRPCServer] rpc.ThriftRPCServer:
> Running RPC server for TTransactionServer
> 2016-08-31 22:31:45,255 INFO  [ThriftRPCServer] server.
> TThreadedSelectorServerWithFix: Starting TThreadedSelectorServerWithFix
> 2016-08-31 22:31:45,359 INFO  [leader-election-tx.service-leader]
> distributed.TransactionService: Transaction Thrift Service started
> successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
> 2016-08-31 22:32:43,325 INFO  
> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
> zookeeper.ClientCnxn: Unable to read additional data from server sessionid
> 

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread F21

Hey Thomas,

Where are the Transaction Manager logs located? I have a 
/tmp/tephra-/tephra-service--m9edd51-hmaster1.m9edd51.log, which was 
where I got the logs from yesterday. There wasn't any information on 
whether the Transaction Manager crashed or not:


Here's the last half of the logs:
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:java.compiler=
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:os.name=Linux
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:os.version=4.4.0-36-generic
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:user.name=hadoop
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:user.home=/opt/hbase
2016-08-31 22:31:43,783 INFO  [main] zookeeper.ZooKeeper: Client 
environment:user.dir=/opt/hbase
2016-08-31 22:31:43,784 INFO  [main] zookeeper.ZooKeeper: Initiating 
client connection, connectString=m9edd51-zookeeper.m9edd51 
sessionTimeout=9 
watcher=org.apache.tephra.zookeeper.TephraZKClientService$5@45c7e403
2016-08-31 22:31:43,824 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Opening socket connection to server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-08-31 22:31:43,829 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Socket connection established to 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
2016-08-31 22:31:43,835 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Session establishment complete on server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 
0x156e2ba5ba50004, negotiated timeout = 4
2016-08-31 22:31:43,881 INFO  [main] 
inmemory.InMemoryTransactionService: Configuring TransactionService, 
address: 0.0.0.0, port: 15165, threads: 20, io threads: 2, max read 
buffer (bytes): 16777216
2016-08-31 22:31:43,882 INFO  [main] tephra.TransactionServiceMain: 
Starting TransactionServiceMain
2016-08-31 22:31:43,890 INFO  [main] zookeeper.LeaderElection: Start 
leader election on m9edd51-zookeeper.m9edd51/tx.service/leader with guid 
9fd6dcfa-72eb-4705-a79f-aad1ec914945
2016-08-31 22:31:43,953 INFO  [leader-election-tx.service-leader] 
metrics.DefaultMetricsCollector: Configured metrics report to emit every 
60 seconds
2016-08-31 22:31:44,177 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Starting transaction manager.
2016-08-31 22:31:44,179 INFO  [DefaultMetricsCollector STARTING] 
metrics.DefaultMetricsCollector: Started metrics reporter
2016-08-31 22:31:44,309 WARN  [HDFSTransactionStateStorage STARTING] 
util.NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
2016-08-31 22:31:45,041 INFO  [HDFSTransactionStateStorage STARTING] 
persist.HDFSTransactionStateStorage: Using snapshot dir 
/tmp/tephra/snapshots
2016-08-31 22:31:45,118 INFO  [ThriftRPCServer] 
persist.HDFSTransactionStateStorage: Creating snapshot dir at 
/tmp/tephra/snapshots
2016-08-31 22:31:45,212 INFO  [ThriftRPCServer] 
persist.HDFSTransactionStateStorage: No snapshot files found in 
/tmp/tephra/snapshots
2016-08-31 22:31:45,214 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Starting periodic timed-out transaction 
cleanup every 10 seconds with default timeout of 60 seconds.
2016-08-31 22:31:45,215 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Starting periodic snapshot thread, frequency 
= 300 seconds, location = /tmp/tephra/snapshots
2016-08-31 22:31:45,215 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Starting periodic Metrics Emitter thread, 
frequency = 1
2016-08-31 22:31:45,242 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Starting RPC server for TTransactionServer
2016-08-31 22:31:45,255 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Running RPC server for TTransactionServer
2016-08-31 22:31:45,255 INFO  [ThriftRPCServer] 
server.TThreadedSelectorServerWithFix: Starting 
TThreadedSelectorServerWithFix
2016-08-31 22:31:45,359 INFO  [leader-election-tx.service-leader] 
distributed.TransactionService: Transaction Thrift Service started 
successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
2016-08-31 22:32:43,325 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Unable to read additional data from server sessionid 0x156e2ba5ba50004, 
likely server has closed socket, closing socket connection and 
attempting reconnect
2016-08-31 22:32:43,427 INFO  [leader-election-tx.service-leader] 
zookeeper.LeaderElection: Disconnected from ZK: 
m9edd51-zookeeper.m9edd51 for /tx.service/leader
2016-08-31 22:32:43,427 INFO  [leader-election-tx.service-leader] 

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread Thomas D'Silva
Can you check the Transaction Manager logs and see if there are any errors?
Also, can you run jps and confirm the Transaction Manager is running?
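
For reference, a minimal sketch of both checks from the master host,
assuming the transaction manager runs as its own JVM and that jps and grep
are on the PATH (the log path is the one mentioned earlier in this thread):

# List JVMs; the transaction manager's main class should appear, e.g.
#   12345 org.apache.tephra.TransactionServiceMain
jps -l | grep -i tephra

# Scan the transaction manager log for errors
grep -iE 'error|exception' /tmp/tephra-*/tephra-service-*.log | tail -n 20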

On Wed, Aug 31, 2016 at 2:12 AM, F21  wrote:

> Just another update. Even though the logs say that the transaction
> manager is not running, it is actually running.
>
> I confirmed this by checking the output of ps and connecting to the
> transaction manager:
>
> bash-4.3# ps
> PID   USER TIME   COMMAND
> 1 root   0:01 bash /run-hbase-phoenix.sh
>   137 hadoop 0:19 /usr/lib/jvm/java-1.8-openjdk/bin/java
> -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC
> -XX:PermSize=128m -XX:Ma
>   189 hadoop 0:08 /usr/lib/jvm/java-1.8-openjdk/bin/java
> -XX:+UseConcMarkSweepGC -cp /opt/hbase/bin/../lib/*:/opt/h
> base/bin/../conf/:/opt/hbase/phoenix-c
>   542 root   0:00 /bin/bash
>  9035 root   0:00 sleep 1
>  9036 root   0:00 ps
>
> bash-4.3# wget localhost:15165
> Connecting to localhost:15165 (127.0.0.1:15165)
> wget: error getting response: Connection reset by peer
>
>
> On 31/08/2016 3:25 PM, F21 wrote:
>
>> This only seems to be a problem when I have HBase running in fully
>> distributed mode (1 master, 1 regionserver and 1 zookeeper node in
>> different docker images).
>>
>> If I have HBase running in standalone mode with HBase and Phoenix and the
>> Query server in 1 docker image, it works correctly.
>>
>> On 31/08/2016 11:21 AM, F21 wrote:
>>
>>> I have HBase 1.2.2 and Phoenix 4.8.0 on my HBase master, running on
>>> Alpine Linux with OpenJDK JRE 8.
>>>
>>> This is my hbase-site.xml:
>>>
>>> <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>> <configuration>
>>>   <property>
>>>     <name>hbase.rootdir</name>
>>>     <value>hdfs://mycluster/hbase</value>
>>>   </property>
>>>   <property>
>>>     <name>zookeeper.znode.parent</name>
>>>     <value>/hbase</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.cluster.distributed</name>
>>>     <value>true</value>
>>>   </property>
>>>   <property>
>>>     <name>hbase.zookeeper.quorum</name>
>>>     <value>m9edd51-zookeeper.m9edd51</value>
>>>   </property>
>>>   <property>
>>>     <name>data.tx.snapshot.dir</name>
>>>     <value>/tmp/tephra/snapshots</value>
>>>   </property>
>>>   <property>
>>>     <name>data.tx.timeout</name>
>>>     <value>60</value>
>>>   </property>
>>>   <property>
>>>     <name>phoenix.transactions.enabled</name>
>>>     <value>true</value>
>>>   </property>
>>> </configuration>
>>>
>>> I am able to start the master correctly. I am also able to create a
>>> non-transactional table.
>>>
>>> However, if I create a transactional table, I get this error: ERROR
>>> [TTransactionServer-rpc-0] thrift.ProcessFunction: Internal error
>>> processing startShort
>>>
>>> This is what I see in the logs:
>>>
>>> 2016-08-31 01:08:33,560 WARN 
>>> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
>>> zookeeper.ClientCnxn: Session 0x156de22abec0004 for server null, unexpected
>>> error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl
>>> .java:717)
>>> at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientC
>>> nxnSocketNIO.java:361)
>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.
>>> java:1081)
>>> 2016-08-31 01:08:33,616 INFO  [DefaultMetricsCollector STOPPING]
>>> metrics.DefaultMetricsCollector: Stopped metrics reporter
>>> 2016-08-31 01:08:33,623 INFO  [ThriftRPCServer]
>>> tephra.TransactionManager: Took 170.7 ms to stop
>>> 2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: RPC
>>> server for TTransactionServer stopped.
>>> 2016-08-31 01:08:34,776 INFO 
>>> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
>>> zookeeper.ClientCnxn: Opening socket connection to server
>>> m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to
>>> authenticate using SASL (unknown error)
>>> 2016-08-31 01:08:34,777 INFO 
>>> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
>>> zookeeper.ClientCnxn: Socket connection established to
>>> m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
>>> 2016-08-31 01:08:34,778 INFO 
>>> [main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
>>> zookeeper.ClientCnxn: Session establishment complete on server
>>> m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid =
>>> 0x156de22abec0004, negotiated timeout = 4
>>> 2016-08-31 01:08:34,783 INFO [leader-election-tx.service-leader]
>>> zookeeper.LeaderElection: Connected to ZK, running election:
>>> m9edd51-zookeeper.m9edd51 for /tx.service/leader
>>> 2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer:
>>> Starting RPC server for TTransactionServer
>>> 2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer:
>>> Running RPC server for TTransactionServer
>>> 2016-08-31 01:08:34,816 INFO  [ThriftRPCServer]
>>> server.TThreadedSelectorServerWithFix: Starting
>>> TThreadedSelectorServerWithFix
>>> 2016-08-31 01:08:34,822 INFO [leader-election-tx.service-leader]
>>> distributed.TransactionService: Transaction Thrift Service started
>>> successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
>>> 2016-08-31 01:10:42,830 ERROR [TTransactionServer-rpc-0]
>>> 

Re: Phoenix has slow response times compared to HBase

2016-08-31 Thread Mujtaba Chohan
Something seems inherently wrong in these test results.

* How are you running the Phoenix queries? Were the concurrent Phoenix
queries using the same JVM? Was the JVM restarted after changing the number
of concurrent users?
* Is the plotted response time from the first execution of the query, the
second, or an average of both?
* Is the UUID being filtered on randomly distributed? Does each UUID match
a single row?
* It seems that even a non-concurrent Phoenix query that filters on UUID
takes 500ms in your environment. Can you try the same query in Sqlline a
few times and see how much time each run takes?
* What is the explain plan for your Phoenix query? (a sketch follows this
list)
* If it's slow in Sqlline as well, then try truncating your SYSTEM.STATS
table, reconnecting Sqlline, and executing the query again
* Can you share your table schema, how you ran the Phoenix queries, and
your HBase-equivalent code?
* Any Phoenix tuning defaults that you changed?
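
A minimal Sqlline sketch of the explain and stats steps above (the table
and column come from this thread; whether uuid is the leading primary-key
column is an assumption, and the exact plan text varies by environment):

-- A point lookup should show something like "POINT LOOKUP ON 1 KEY";
-- a FULL SCAN here would go a long way toward explaining the latency.
EXPLAIN SELECT * FROM schema.DOCUMENTS WHERE uuid = 'some-uuid';

-- Phoenix has no TRUNCATE statement; clearing the stats (guideposts) is
-- done with a DELETE, followed by reconnecting the client:
DELETE FROM SYSTEM.STATS;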

Thanks,
Mujtaba

(previous response wasn't complete before I hit send)


On Wed, Aug 31, 2016 at 10:40 AM, Mujtaba Chohan  wrote:

> Something seems inherently wrong in these test results.
>
> * How are you running the Phoenix queries? Were the concurrent Phoenix
> queries using the same JVM? Was the JVM restarted after changing the
> number of concurrent users?
> * Is the plotted response time from the first execution of the query, the
> second, or an average of both?
> * Is the UUID being filtered on randomly distributed? Does each UUID match
> a single row?
> * It seems that even a non-concurrent Phoenix query that filters on UUID
> takes 500ms in your environment. Can you try the same query in Sqlline a
> few times and see how much time each run takes?
> * If it's slow in Sqlline as well, then try truncating your SYSTEM.STATS
> * Can you share your table schema, how you ran the Phoenix queries, and
> your HBase-equivalent code?
>
>
>
>
> On Wed, Aug 31, 2016 at 5:42 AM, Narros, Eduardo (ELS-LON) <
> e.nar...@elsevier.com> wrote:
>
>> Hi,
>>
>>
>> We are exploring Phoenix and have run some load tests to see whether it
>> would scale. We have noted that, compared to HBase, Phoenix response
>> times have a much slower average as the number of concurrent users
>> increases. We are trying to understand whether this is expected or
>> whether there is something we are missing.
>>
>>
>> This is the test we have performed:
>>
>>
>>- Create a table (20 columns) and load it with 400 million records
>>indexed via a column called 'uuid'.
>>- Perform the following queries using 10, 20, 100, 200, 400, and 600
>>users per second; each user performs each query twice:
>>   - Phoenix: select * from schema.DOCUMENTS where uuid = ?
>>   - Phoenix: select /*+ SERIAL SMALL */* from schema.DOCUMENTS where
>>   uuid = ?
>>   - HBase equivalent to: select * from schema.DOCUMENTS where uuid = ?
>>- The results are attached and show that Phoenix response times are at
>>least an order of magnitude above those of HBase.
>>
>> The tests were run from the Master node of a CDH5.7.2 cluster with
>> Phoenix 4.7.0.
>>
>> Are these test results expected?
>>
>> Kind Regards,
>>
>> Edu
>>
>>
>> --
>>
>> Elsevier Limited. Registered Office: The Boulevard, Langford Lane,
>> Kidlington, Oxford, OX5 1GB, United Kingdom, Registration No. 1982084,
>> Registered in England and Wales.
>>
>
>



Re: Cannot select data from a system table

2016-08-31 Thread Ted Yu
Thanks for the confirmation, Ankit.

> On Aug 31, 2016, at 3:36 AM, Ankit Singhal  wrote:
> 
> bq. Is this documented somewhere ?
> not as such; https://phoenix.apache.org/language/index.html#quoted_name is 
> generally for case-sensitive identifiers (and to allow some special 
> characters), and the same can be used for keywords. 
> 
> bq. Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give 
> us a good idea.
> Yes Ted, you are right. Phoenix keywords are the tokens in 
> phoenix-core/src/main/antlr3/PhoenixSQL.g 
> 
> 
> 
>> On Sun, Aug 21, 2016 at 8:33 PM, Ted Yu  wrote:
>> Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give us 
>> a good idea.
>> 
>> Experts, please correct me if I am wrong.
>> 
>>> On Sun, Aug 21, 2016 at 7:21 AM, Aaron Molitor  
>>> wrote:
>>> Thanks, Ankit, that worked. 
>>> 
>>> And on the heels of Ted's question... Are the reserved words documented 
>>> somewhere (even if just a list)? I've been looking at this page: 
>>> http://phoenix.apache.org/language/index.html -- it feels like the place 
>>> where I should find such a list, but I don't see it explicitly called out.  
>>> 
>>> -Aaron
 On Aug 21, 2016, at 09:04, Ted Yu  wrote:
 
 Ankit:
 Is this documented somewhere ?
 
 Thanks
 
> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal  
> wrote:
> Aaron,
> 
> you can escape the reserved-keyword check with double quotes ("")
> 
> SELECT * FROM SYSTEM."FUNCTION"
> 
> Regards,
> Ankit Singhal
> 
>> On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor 
>>  wrote:
>> Looks like the SYSTEM.FUNCTION table is named with a reserved word. Is 
>> this a known bug?
>> 
>> 
>> 0: jdbc:phoenix:stl-colo-srv073.splicemachine> !tables
>> +------------+--------------+-------------+---------------+
>> | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   |
>> +------------+--------------+-------------+---------------+
>> |            | SYSTEM       | CATALOG     | SYSTEM TABLE  |
>> |            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |
>> |            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |
>> |            | SYSTEM       | STATS       | SYSTEM TABLE  |
>> |            | TPCH         | CUSTOMER    | TABLE         |
>> |            | TPCH         | LINEITEM    | TABLE         |
>> |            | TPCH         | NATION      | TABLE         |
>> |            | TPCH         | ORDERS      | TABLE         |
>> |            | TPCH         | PART        | TABLE         |
>> (remaining columns omitted: REMARKS, TYPE_NAME, SELF_REFERENCING_COL_NAME,
>> REF_GENERATION, INDEX_STATE, VIEW_STATEMENT, VIEW_TYPE, and INDEX_TYPE were
>> empty; IMMUTABLE_ROWS=false, SALT_BUCKETS=null, MULTI_TENANT=false for
>> every row)

Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread F21

chmod +x myfile.sh

Also, check that the line endings for the file are LF and not CRLF.
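
A minimal sketch of both fixes, assuming a busybox userland inside the
image (dos2unix may not be installed, so the sed fallback strips the
carriage returns in place):

chmod +x start-hbase-phoenix.sh
# Reveal any hidden \r characters if the file has CRLF endings
head -n 1 start-hbase-phoenix.sh | od -c
# Convert CRLF to LF
sed -i 's/\r$//' start-hbase-phoenix.sh   # or: dos2unix start-hbase-phoenix.sh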

On 31/08/2016 9:37 PM, Cheyenne Forbes wrote:

how do I make start-hbase-phoenix.sh executable?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com 


Mobile: 876-881-7889 
skype: cheyenne.forbes1


On Wed, Aug 31, 2016 at 5:39 AM, F21 wrote:


Did you build the image yourself? If so, you need to make
start-hbase-phoenix.sh executable before building it.


On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:


" ': No such file or directory"


Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread Cheyenne Forbes
how do I make start-hbase-phoenix.sh executable?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1


On Wed, Aug 31, 2016 at 5:39 AM, F21  wrote:

> Did you build the image yourself? If so, you need to make
> start-hbase-phoenix.sh executable before building it.
>
>
> On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:
>
>>
>> " ': No such file or directory"
>>
>
>
>


Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread F21
Did you build the image yourself? If so, you need to make 
start-hbase-phoenix.sh executable before building it.


On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:


" ': No such file or directory"





Re: Cannot select data from a system table

2016-08-31 Thread Ankit Singhal
bq. Is this documented somewhere?
not as such; https://phoenix.apache.org/language/index.html#quoted_name is
generally for case-sensitive identifiers (and to allow some special
characters), and the same can be used for keywords.

bq. Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would
give us a good idea.
Yes Ted, you are right. Phoenix keywords are the tokens in
phoenix-core/src/main/antlr3/PhoenixSQL.g
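
A short illustration of the two uses of quoting mentioned above (the
SYSTEM."FUNCTION" query is from this thread; the mixed-case identifiers are
made-up examples):

-- Escape an identifier that collides with a reserved keyword:
SELECT * FROM SYSTEM."FUNCTION";

-- Quoting also makes identifiers case sensitive:
CREATE TABLE "MyTable" ("myCol" VARCHAR PRIMARY KEY);
SELECT "myCol" FROM "MyTable";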



On Sun, Aug 21, 2016 at 8:33 PM, Ted Yu  wrote:

> Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give
> us a good idea.
>
> Experts, please correct me if I am wrong.
>
> On Sun, Aug 21, 2016 at 7:21 AM, Aaron Molitor  > wrote:
>
>> Thanks, Ankit, that worked.
>>
>> And on the heels of Ted's question... Are the reserved words documented
>> somewhere (even if just a list)? I've been looking at this page:
>> http://phoenix.apache.org/language/index.html -- it feels like the place
>> where I should find such a list, but I don't see it explicitly called out.
>>
>> -Aaron
>>
>> On Aug 21, 2016, at 09:04, Ted Yu  wrote:
>>
>> Ankit:
>> Is this documented somewhere ?
>>
>> Thanks
>>
>> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal 
>> wrote:
>>
>>> Aaron,
>>>
>>> you can escape the reserved-keyword check with double quotes ("")
>>>
>>> SELECT * FROM SYSTEM."FUNCTION"
>>>
>>> Regards,
>>> Ankit Singhal
>>>
>>> On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor <
>>> amoli...@splicemachine.com> wrote:
>>>
 Looks like the SYSTEM.FUNCTION table is named with a reserved word. Is
 this a known bug?


 0: jdbc:phoenix:stl-colo-srv073.splicemachine> !tables
 +------------+--------------+-------------+---------------+
 | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   |
 +------------+--------------+-------------+---------------+
 |            | SYSTEM       | CATALOG     | SYSTEM TABLE  |
 |            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |
 |            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |
 |            | SYSTEM       | STATS       | SYSTEM TABLE  |
 |            | TPCH         | CUSTOMER    | TABLE         |
 |            | TPCH         | LINEITEM    | TABLE         |
 |            | TPCH         | NATION      | TABLE         |
 |            | TPCH         | ORDERS      | TABLE         |
 |            | TPCH         | PART        | TABLE         |
 |            | TPCH         | PARTSUPP    | TABLE         |
 +------------+--------------+-------------+---------------+
 (remaining columns omitted: REMARKS, TYPE_NAME, SELF_REFERENCING_COL_NAME,
 REF_GENERATION, INDEX_STATE, VIEW_STATEMENT, VIEW_TYPE, and INDEX_TYPE were
 empty; IMMUTABLE_ROWS=false, SALT_BUCKETS=null, MULTI_TENANT=false for
 every row)

Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread Cheyenne Forbes
I decided to use Docker, but after executing "docker run -i -t 4044a9f77527
./start-hbase-phoenix.sh" I get the following error (including the opening
quote beside the colon):

" ': No such file or directory"

Regards,
Cheyenne Forbes


On Sat, Aug 27, 2016 at 2:00 AM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> This is in the query server log:
>
> 2016-08-27 01:30:33,595 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform.$
> 2016-08-27 01:30:49,021 INFO org.apache.phoenix.metrics.Metrics:
> Initializing metrics system: phoenix
> 2016-08-27 01:30:49,756 WARN org.apache.hadoop.metrics2.impl.MetricsConfig:
> Cannot locate configuration: tried hadoop-metrics$
> 2016-08-27 01:30:50,915 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Scheduled snapshot period at 10 second(s).
> 2016-08-27 01:30:50,915 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> phoenix metrics system started
> 2016-08-27 01:30:58,688 INFO 
> org.apache.phoenix.query.ConnectionQueryServicesImpl:
> Found quorum: localhost:2181
> 2016-08-27 01:31:56,305 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller:
> Call exception, tries=10, retries=35, started=$
> 2016-08-27 01:32:20,069 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing master proto$
> 2016-08-27 01:32:20,070 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing zookeeper se$
> 2016-08-27 01:33:48,561 INFO 
> org.apache.phoenix.query.ConnectionQueryServicesImpl:
> Found quorum: localhost:2181
> 2016-08-27 01:34:02,644 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing master proto$
> 2016-08-27 01:34:02,644 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing zookeeper se$
> 2016-08-27 01:34:08,127 INFO 
> org.apache.phoenix.query.ConnectionQueryServicesImpl:
> Found quorum: localhost:2181
> 2016-08-27 01:34:11,006 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing master proto$
> 2016-08-27 01:34:11,007 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing zookeeper se$
> 2016-08-27 01:34:13,227 INFO 
> org.apache.phoenix.query.ConnectionQueryServicesImpl:
> Found quorum: localhost:2181
> 2016-08-27 01:34:14,035 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing master proto$
> 2016-08-27 01:34:14,043 INFO org.apache.hadoop.hbase.
> client.ConnectionManager$HConnectionImplementation: Closing zookeeper se$
> 2016-08-27 01:34:17,566 INFO org.eclipse.jetty.util.log: Logging
> initialized @243633ms
> 2016-08-27 01:34:33,564 INFO org.eclipse.jetty.server.Server:
> jetty-9.2.z-SNAPSHOT
> 2016-08-27 01:34:34,156 INFO org.eclipse.jetty.server.ServerConnector:
> Started ServerConnector@539b5b78{HTTP/1.1}{0.0.0.0:876$
> 2016-08-27 01:34:34,157 INFO org.eclipse.jetty.server.Server: Started
> @260431ms
> 2016-08-27 01:34:34,157 INFO org.apache.calcite.avatica.server.HttpServer:
> Service listening on port 8765.
>
>
> Regards,
>
> Cheyenne Forbes
>
> Chief Executive Officer
> Avapno Omnitech
>
> Chief Operating Officer
> Avapno Solutions
>
> Chairman
> Avapno Assets, LLC
>
> Bethel Town P.O
> Westmoreland
> Jamaica
>
> Email: cheyenne.osanu.for...@gmail.com
> Mobile: 876-881-7889
> skype: cheyenne.forbes1
>
>
>
> On Sat, Aug 27, 2016 at 12:31 AM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I already see it in the start-hbase-phoenix.sh file
>>
>> export HBASE_CONF_DIR=/opt/hbase/conf
>> export HBASE_CP=/opt/hbase/lib
>> export HBASE_HOME=/opt/hbase
>>
>>
>>
>> On Wed, Aug 24, 2016 at 1:36 AM, F21  wrote:
>> >
>> > You probably need to export some environment variables for HBase to
>> point things to the right place, so that it can use its own bundled hadoop
>> libraries:
>> >
>> > export HBASE_CONF_DIR=/opt/hbase/conf
>> > export HBASE_CP=/opt/hbase/lib
>> > export HBASE_HOME=/opt/hbase
>> >
>> > On 24/08/2016 3:56 PM, Cheyenne Forbes wrote:
>> >
>> > this is what I see:
>> >
>> > 2016-08-24 00:14:45,079 WARN org.apache.hadoop.util.NativeCodeLoader:
>> Unable to load native-hadoop library for your platform.$
>> > 2016-08-24 00:14:58,479 INFO org.apache.phoenix.metrics.Metrics:
>> Initializing metrics system: phoenix
>> > 2016-08-24 00:14:59,065 WARN org.apache.hadoop.metrics2.impl.MetricsConfig:
>> Cannot locate configuration: tried hadoop-metrics$
>> > 2016-08-24 00:14:59,695 INFO 
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
>> Scheduled snapshot period at 10 second(s).
>> > 2016-08-24 00:14:59,695 INFO 
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
>> phoenix metrics system started
>> > 2016-08-24 00:15:05,891 INFO 
>> > org.apache.phoenix.query.ConnectionQueryServicesImpl:
>> Found quorum: localhost:2181
>> >
>> >
>> > Regards,
>> >
>> > Cheyenne Forbes
>> >
>> > 

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread F21
Just another update. Even though the logs say that the transaction
manager is not running, it is actually running.


I confirmed this by checking the output of ps and connecting to the 
transaction manager:


bash-4.3# ps
PID   USER TIME   COMMAND
1 root   0:01 bash /run-hbase-phoenix.sh
  137 hadoop 0:19 /usr/lib/jvm/java-1.8-openjdk/bin/java 
-Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC 
-XX:PermSize=128m -XX:Ma
  189 hadoop 0:08 /usr/lib/jvm/java-1.8-openjdk/bin/java 
-XX:+UseConcMarkSweepGC -cp 
/opt/hbase/bin/../lib/*:/opt/hbase/bin/../conf/:/opt/hbase/phoenix-c
  542 root   0:00 /bin/bash
 9035 root   0:00 sleep 1
 9036 root   0:00 ps

bash-4.3# wget localhost:15165
Connecting to localhost:15165 (127.0.0.1:15165)
wget: error getting response: Connection reset by peer
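
Since the Thrift port accepts the TCP connection but then resets it, and
the logs in this thread show the ZooKeeper session dropping, it may also be
worth checking connectivity from this container to ZooKeeper -- a sketch,
assuming nc is available in the alpine image and ZooKeeper's four-letter
commands are enabled:

bash-4.3# echo ruok | nc m9edd51-zookeeper.m9edd51 2181
imok
bash-4.3# echo stat | nc m9edd51-zookeeper.m9edd51 2181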

On 31/08/2016 3:25 PM, F21 wrote:
This only seems to be a problem when I have HBase running in fully 
distributed mode (1 master, 1 regionserver and 1 zookeeper node in 
different docker images).


If I have HBase running in standalone mode with HBase and Phoenix and 
the Query server in 1 docker image, it works correctly.


On 31/08/2016 11:21 AM, F21 wrote:
I have HBase 1.2.2 and Phoenix 4.8.0 on my HBase master, running on
Alpine Linux with OpenJDK JRE 8.


This is my hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>m9edd51-zookeeper.m9edd51</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>phoenix.transactions.enabled</name>
    <value>true</value>
  </property>
</configuration>

I am able to start the master correctly. I am also able to create a
non-transactional table.


However, if I create a transactional table, I get this error: ERROR 
[TTransactionServer-rpc-0] thrift.ProcessFunction: Internal error 
processing startShort


This is what I see in the logs:

2016-08-31 01:08:33,560 WARN 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session 0x156de22abec0004 for server null, 
unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-08-31 01:08:33,616 INFO  [DefaultMetricsCollector STOPPING] 
metrics.DefaultMetricsCollector: Stopped metrics reporter
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Took 170.7 ms to stop
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
RPC server for TTransactionServer stopped.
2016-08-31 01:08:34,776 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-08-31 01:08:34,777 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
2016-08-31 01:08:34,778 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 
0x156de22abec0004, negotiated timeout = 4
2016-08-31 01:08:34,783 INFO [leader-election-tx.service-leader] 
zookeeper.LeaderElection: Connected to ZK, running election: 
m9edd51-zookeeper.m9edd51 for /tx.service/leader
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Starting RPC server for TTransactionServer
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Running RPC server for TTransactionServer
2016-08-31 01:08:34,816 INFO  [ThriftRPCServer] 
server.TThreadedSelectorServerWithFix: Starting 
TThreadedSelectorServerWithFix
2016-08-31 01:08:34,822 INFO [leader-election-tx.service-leader] 
distributed.TransactionService: Transaction Thrift Service started 
successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
2016-08-31 01:10:42,830 ERROR [TTransactionServer-rpc-0] 
thrift.ProcessFunction: Internal error processing startShort

java.lang.IllegalStateException: Transaction Manager is not running.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.tephra.TransactionManager.ensureAvailable(TransactionManager.java:709)
at 
org.apache.tephra.TransactionManager.startTx(TransactionManager.java:768)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:728)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:716)
at 

java.lang.IllegalStateException: Number of bytes to resize to must be greater than zero, but instead is -1984010164

2016-08-31 Thread Dong-iL, Kim
Hi.

When I run a simple GROUP BY query, the exception below occurs.
What should I do?

Thanks.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
PLAYER_ACTION,\x00\x00B\x8D2J7GvczV2U8J8Z2itjcODTGI\x003\x001\x004\x00\x80\x00\x00\x00\x00l\xE1e1157988054337976043\x00\x00\x00\x01V\xBA\x8DH~,1472148086609.d1b6338ff6820e5469eb78643e4e177d.:
 Number of bytes to resize to must be greater than zero, but instead is 
-1984010164
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:229)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1334)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1329)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2412)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2178)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Number of bytes to resize to must 
be greater than zero, but instead is -1984010164
at 
org.apache.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:135)
at 
org.apache.phoenix.cache.aggcache.SpillableGroupByCache$1.removeEldestEntry(SpillableGroupByCache.java:172)
at java.util.LinkedHashMap.afterNodeInsertion(LinkedHashMap.java:299)
at java.util.HashMap.putVal(HashMap.java:663)
at java.util.HashMap.put(HashMap.java:611)
at 
org.apache.phoenix.cache.aggcache.SpillableGroupByCache.cache(SpillableGroupByCache.java:249)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:420)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:165)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:202)
... 12 more (state=08000,code=101)
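
The failure happens while SpillableGroupByCache resizes its memory chunk,
so the spillable group-by path and its cache size are the configuration
knobs involved. A hedged hbase-site.xml sketch for the region servers --
the property names exist among Phoenix's tuning options, but the values
are illustrative and disabling spilling is a diagnostic workaround, not a
confirmed fix:

<property>
  <!-- Bypass the disk-spilling group-by cache (and its resize path) -->
  <name>phoenix.groupby.spillable</name>
  <value>false</value>
</property>
<property>
  <!-- Maximum size, in bytes, of the group-by cache before it spills -->
  <name>phoenix.groupby.maxCacheSize</name>
  <value>102400000</value>
</property>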