Is it possible to run the phoenix query server on a machine other than the regionservers?

2015-12-17 Thread F21
I have successfully deployed phoenix and the phoenix query server into a 
toy HBase cluster.


I am currently running the HTTP query server on all regionservers, 
however I think it would be much better if I could run the HTTP query 
servers in separate docker containers or on separate machines. This way, 
I can easily scale the number of query servers and put them behind a DNS 
name such as phoenix.mycompany.internal.


I've had a look at the configuration, but it seems to be heavily tied to 
HBase. For example, it requires the HBASE_CONF_DIR environment variable 
to be set.


Is this something that's currently possible?


Re: Is it possible to run the phoenix query server on a machine other than the regionservers?

2015-12-17 Thread F21

Hey Rafa,

So in terms of the hbase-site.xml, I just need the entries for the 
location of the zookeeper quorum and the zookeeper znode for the 
cluster, right?


Cheers!

On 17/12/2015 9:48 PM, rafa wrote:

Hi F21,

You can install the Query Server on any server that has a network 
connection to your cluster. You'll need connectivity to zookeeper.


Usually the Apache Phoenix Query Server is installed on the master nodes.

According to the Apache Phoenix docs: 
https://phoenix.apache.org/server.html


"The server is packaged in a standalone jar, 
phoenix-server--runnable.jar. This jar and HBASE_CONF_DIR on 
the classpath are all that is required to launch the server."


You'll only need that jar and the HBase XML config files.

Regards,
Rafa.

On Thu, Dec 17, 2015 at 11:31 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


I have successfully deployed phoenix and the phoenix query server
into a toy HBase cluster.

I am currently running the http query server on all regionserver,
however I think it would be much better if I can run the http
query servers on separate docker containers or machines. This way,
I can easily scale the number of query servers and put them
against a DNS name such as phoenix.mycompany.internal.

I've had a look at the configuration, but it seems to be heavily
tied to HBase. For example, it requires the HBASE_CONF_DIR
environment variable to be set.

Is this something that's currently possible?
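rafa's point above — that the query server only needs the runnable jar plus the HBase client config on its classpath — can be sketched as follows. The quorum hostname, znode, and the `queryserver.py` launch line are illustrative assumptions, not values from this thread:

```shell
# Sketch: write a minimal client-side hbase-site.xml for a standalone
# query server host. The quorum host below is a placeholder.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.mycompany.internal</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/hbase-site.xml"

# Launching would then point HBASE_CONF_DIR at this directory
# (not run here; script name as shipped with the Phoenix binary dist):
# HBASE_CONF_DIR="$CONF_DIR" bin/queryserver.py start
```

With that in place, the query server host needs no regionserver processes at all, only network reachability to zookeeper and the cluster.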






Re: Avatica/Phoenix-Query-Server .NET driver

2016-06-27 Thread F21
I haven't used this driver (I don't write any .NET code), but I used it 
as a reference while building https://github.com/Boostport/avatica, in 
particular for setting up the HTTP requests correctly.


Francis

On 28/06/2016 8:19 AM, Josh Elser wrote:

Hi,

I was just made aware of a neat little .NET driver for Avatica 
(specifically, the authors were focused on Phoenix's use of Avatica in 
the Phoenix Query Server).


https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview

I'll have to try it out at some point, but would love to hear from 
anyone who beats me to it :). I reckon that it should work for any 
>=Avatica-1.6.0 or Apache Phoenix 4.7.0. The documented "Phoenix 
4.4.0" is probably more of a call-out to the version available on Azure.


- Josh





Re: Tephra not starting correctly.

2016-03-30 Thread F21
3-30 23:50:42,289 INFO  [Thread-0] tephra.TransactionServiceMain: 
Stopping TransactionServiceMain


This is the hbase-site.xml for the master (which tephra also runs on):


  
  <property>
    <name>hbase.master.loadbalancer.class</name>
    <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
  </property>
  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>f826338-zookeeper.f826338</value>
  </property>
  


On 31/03/2016 4:47 AM, Mujtaba Chohan wrote:

Few pointers:

- phoenix-core-*.jar is a subset of phoenix-*-server.jar so just 
phoenix-*-server.jar in hbase/lib is enough for region servers and master.
- phoenix-server-*-runnable.jar and phoenix-*-server.jar should be 
enough for query server. Client jar would only duplicate HBase classes 
in hbase/lib.
- Check for exception starting tephra in 
/tmp/tephra-*/tephra-service-*.log (assuming this is the log location 
configured in your tephra-env.sh)


- mujtaba


On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


I have been trying to get tephra working, but wasn't able to get
it starting successfully.

I have a HDFS and HBase 1.1 cluster running in docker containers.
I have confirmed that Phoenix, HDFS and HBase are all working
correctly. Phoenix and the Phoenix query server are also installed
correctly and I can access the cluster using Squirrel SQL with the
thin client.

Here's what I have done:

In the hbase-site.xml of the region servers and masters, add the
following:


<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>

<property>
  <name>data.tx.timeout</name>
  <value>60</value>
</property>


In the hbase-site.xml of the phoenix query server, add:


<property>
  <name>phoenix.transactions.enabled</name>
  <value>true</value>
</property>


For the master, copy the following to hbase/lib:
phoenix-4.7.0-HBase-1.1-server
phoenix-core-4.7.0-HBase-1.1

Also, copy tephra and tephra-env.sh to hbase/bin

On the region server, copy the following to hbase/lib:
phoenix-4.7.0-HBase-1.1-server
phoenix-core-4.7.0-HBase-1.1

For the phoenix query server, copy the following to hbase/lib:
phoenix-server-4.7.0-HBase-1.1-runnable
phoenix-4.7.0-HBase-1.1-client

This is what I get when I try to start tephra on the master:

root@f826338-hmaster1:/opt/hbase/bin# ./tephra start
Wed Mar 30 09:54:08 UTC 2016 Starting tephra service on
f826338-hmaster1.f826338
Running class co.cask.tephra.TransactionServiceMain

root@f826338-hmaster1:/opt/hbase/bin# ./tephra status
checking status
 * tephra is not running

Any pointers appreciated! :)
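The jar and script layout described in the steps above can be rehearsed in a throwaway sandbox; the `touch`ed files below are stand-ins for the real Phoenix 4.7.0 artifacts, and all paths mirror the message rather than any official layout:

```shell
# Sandbox sketch of the jar/script placement described above.
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/hbase/lib" "$SANDBOX/hbase/bin" "$SANDBOX/dist"

# Stand-ins for the files shipped with the Phoenix 4.7.0 distribution:
touch "$SANDBOX/dist/phoenix-4.7.0-HBase-1.1-server.jar" \
      "$SANDBOX/dist/phoenix-core-4.7.0-HBase-1.1.jar" \
      "$SANDBOX/dist/tephra" "$SANDBOX/dist/tephra-env.sh"

# Master and region servers: Phoenix jars into hbase/lib.
cp "$SANDBOX/dist/"phoenix-*.jar "$SANDBOX/hbase/lib/"
# Master only: tephra launch script and its env file into hbase/bin.
cp "$SANDBOX/dist/tephra" "$SANDBOX/dist/tephra-env.sh" "$SANDBOX/hbase/bin/"

ls "$SANDBOX/hbase/lib" "$SANDBOX/hbase/bin"
```

(Per Mujtaba's pointer later in the thread, the core jar is redundant once the server jar is in hbase/lib.)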







Re: Tephra not starting correctly.

2016-03-30 Thread F21

I think that might be from the tephra startup script.

The folder /opt/hbase/phoenix-assembly/ does not exist on my system.

On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath: 
opt/hbase/phoenix-assembly/target/*


On Wed, Mar 30, 2016 at 5:42 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


Thanks for the hints.

If I remove the client jar, it complains about a missing class:
2016-03-31 00:38:25,929 INFO  [main]
tephra.TransactionServiceMain: Starting TransactionServiceMain
Exception in thread "main" java.lang.NoClassDefFoundError:
com/google/common/util/concurrent/Service$Listener
at

co.cask.tephra.distributed.TransactionService.doStart(TransactionService.java:78)
at

com.google.common.util.concurrent.AbstractService.start(AbstractService.java:90)
at

com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:129)
at
co.cask.tephra.TransactionServiceMain.start(TransactionServiceMain.java:116)
at
co.cask.tephra.TransactionServiceMain.doMain(TransactionServiceMain.java:83)
at
co.cask.tephra.TransactionServiceMain.main(TransactionServiceMain.java:47)
Caused by: java.lang.ClassNotFoundException:
com.google.common.util.concurrent.Service$Listener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
2016-03-31 00:38:25,931 INFO  [Thread-0]
tephra.TransactionServiceMain: Stopping TransactionServiceMain

After adding the client-without-hbase jar, I get a missing method
error:
java.lang.NoSuchMethodError:

co.cask.tephra.TransactionManager.addListener(Lcom/google/common/util/concurrent/Service$Listener;Ljava/util/concurrent/Executor;)V
at

co.cask.tephra.distributed.TransactionService$1.leader(TransactionService.java:83)
at

org.apache.twill.internal.zookeeper.LeaderElection.becomeLeader(LeaderElection.java:229)
at

org.apache.twill.internal.zookeeper.LeaderElection.access$1800(LeaderElection.java:53)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:207)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:186)
at
com.google.common.util.concurrent.Futures$5.run(Futures.java:768)
at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I am not very familiar with java or phoenix itself, but here's my
classpath:
2016-03-31 00:41:06,062 INFO  [main] zookeeper.ZooKeeper: Client

environment:java.class.path=/opt/hbase/bin/../lib/hadoop-mapreduce-client-core-2.5.1.jar:/opt/hbase/bin/../lib/api-asn1-api-1.0.0-M20.jar:/opt/hbase/bin/../lib/hadoop-mapreduce-client-app-2.5.1.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/hbase-rest-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-annotations-2.5.1.jar:/opt/hbase/bin/../lib/hbase-hadoop2-compat-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-common-2.5.1.jar:/opt/hbase/bin/../lib/disruptor-3.3.0.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.9.13.jar:/opt/hbase/bin/../lib/aopalliance-1.0.jar:/opt/hbase/bin/../lib/jaxb-api-2.2.2.jar:/opt/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/opt/hbase/bin/../lib/java-xmlbuilder-0.4.jar:/opt/hbase/bin/../lib/protobuf-java-2.5.0.jar:/opt/hbase/bin/../lib/junit-4.12.jar:/opt/hbase/bin/../lib/hbase-shell-1.1.3.jar:/opt/hbase/bin/../lib/phoenix-4.7.0-HBase-1.1-server.jar:/opt/hbase/bin/..

/lib/hbase-it-1.1.3.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.8.jar:/opt/hbase/bin/../lib/spymemcached-2.11.6.jar:/opt/hbase/bin/../lib/hbase-server-1.1.3-tests.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/hbase/bin/../lib/api-util-1.0.0-M20.jar:/opt/hbase/bin/../lib/jets3t-0.9.0.jar:/opt/hbase/bin/../lib/netty-all-4.0.23.Final.jar:/opt/hbase/bin/../lib/paranamer-2.3.jar:/opt/hbase/bin/../lib/jersey-core-1.9.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/leveldbjni-all-1.8.jar:/opt/hbase/bin/../lib/commons-io-2.4.jar:/opt/hbase/bin/../lib/commons-logging-1.2.jar:/opt/hbase/bin/../lib/commons-compress-1.4.1.jar:/opt/hbase/bin/../lib/jersey-guice-1.9.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hbase/bin/../lib/hadoop-mapreduce-client-common-2.5.1.jar:/opt/

Re: Tephra not starting correctly.

2016-03-30 Thread F21

I removed the following from hbase-site.xml and tephra started correctly:

  
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>f826338-zookeeper.f826338</value>
  </property>
  

However, it now keeps trying to connect to zookeeper on localhost, which 
wouldn't work, because my zookeeper is on another host:


2016-03-31 00:06:21,972 WARN  [main-SendThread(localhost:2181)] 
zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, 
closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)


Any ideas how this can be fixed?



On 31/03/2016 4:47 AM, Mujtaba Chohan wrote:

Few pointers:

- phoenix-core-*.jar is a subset of phoenix-*-server.jar so just 
phoenix-*-server.jar in hbase/lib is enough for region servers and master.
- phoenix-server-*-runnable.jar and phoenix-*-server.jar should be 
enough for query server. Client jar would only duplicate HBase classes 
in hbase/lib.
- Check for exception starting tephra in 
/tmp/tephra-*/tephra-service-*.log (assuming this is the log location 
configured in your tephra-env.sh)


- mujtaba


On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


I have been trying to get tephra working, but wasn't able to get
it starting successfully.

I have a HDFS and HBase 1.1 cluster running in docker containers.
I have confirmed that Phoenix, HDFS and HBase are both working
correctly. Phoenix and the Phoenix query server are also installed
correctly and I can access the cluster using Squirrel SQL with the
thin client.

Here's what I have done:

In the hbase-site.xml of the region servers and masters, add the
following:


<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>

<property>
  <name>data.tx.timeout</name>
  <value>60</value>
</property>


In the hbase-site.xml of the phoenix query server, add:


<property>
  <name>phoenix.transactions.enabled</name>
  <value>true</value>
</property>


For the master, copy the following to hbase/lib:
phoenix-4.7.0-HBase-1.1-server
phoenix-core-4.7.0-HBase-1.1

Also, copy tephra and tephra-env.sh to hbase/bin

On the region server, copy the following to hbase/lib:
phoenix-4.7.0-HBase-1.1-server
phoenix-core-4.7.0-HBase-1.1

For the phoenix query server, copy the following to hbase/lib:
phoenix-server-4.7.0-HBase-1.1-runnable
phoenix-4.7.0-HBase-1.1-client

This is what I get when I try to start tephra on the master:

root@f826338-hmaster1:/opt/hbase/bin# ./tephra start
Wed Mar 30 09:54:08 UTC 2016 Starting tephra service on
f826338-hmaster1.f826338
Running class co.cask.tephra.TransactionServiceMain

root@f826338-hmaster1:/opt/hbase/bin# ./tephra status
checking status
 * tephra is not running

Any pointers appreciated! :)







Re: Tephra not starting correctly.

2016-03-30 Thread F21
I just downloaded tephra 0.7.0 from GitHub and extracted it into the 
container.


Using the same setup as before, I ran:
export HBASE_CP=/opt/hbase/lib
export HBASE_HOME=/opt/hbase

Running the standalone tephra using ./tephra start worked correctly and 
it was able to become the leader.


Do you think this might be a bug?

On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath: 
opt/hbase/phoenix-assembly/target/*


On Wed, Mar 30, 2016 at 5:42 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


Thanks for the hints.

If I remove the client jar, it complains about a missing class:
2016-03-31 00:38:25,929 INFO  [main]
tephra.TransactionServiceMain: Starting TransactionServiceMain
Exception in thread "main" java.lang.NoClassDefFoundError:
com/google/common/util/concurrent/Service$Listener
at

co.cask.tephra.distributed.TransactionService.doStart(TransactionService.java:78)
at

com.google.common.util.concurrent.AbstractService.start(AbstractService.java:90)
at

com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:129)
at
co.cask.tephra.TransactionServiceMain.start(TransactionServiceMain.java:116)
at
co.cask.tephra.TransactionServiceMain.doMain(TransactionServiceMain.java:83)
at
co.cask.tephra.TransactionServiceMain.main(TransactionServiceMain.java:47)
Caused by: java.lang.ClassNotFoundException:
com.google.common.util.concurrent.Service$Listener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
2016-03-31 00:38:25,931 INFO  [Thread-0]
tephra.TransactionServiceMain: Stopping TransactionServiceMain

After adding the client-without-hbase jar, I get a missing method
error:
java.lang.NoSuchMethodError:

co.cask.tephra.TransactionManager.addListener(Lcom/google/common/util/concurrent/Service$Listener;Ljava/util/concurrent/Executor;)V
at

co.cask.tephra.distributed.TransactionService$1.leader(TransactionService.java:83)
at

org.apache.twill.internal.zookeeper.LeaderElection.becomeLeader(LeaderElection.java:229)
at

org.apache.twill.internal.zookeeper.LeaderElection.access$1800(LeaderElection.java:53)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:207)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:186)
at
com.google.common.util.concurrent.Futures$5.run(Futures.java:768)
at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I am not very familiar with java or phoenix itself, but here's my
classpath:
2016-03-31 00:41:06,062 INFO  [main] zookeeper.ZooKeeper: Client

environment:java.class.path=/opt/hbase/bin/../lib/hadoop-mapreduce-client-core-2.5.1.jar:/opt/hbase/bin/../lib/api-asn1-api-1.0.0-M20.jar:/opt/hbase/bin/../lib/hadoop-mapreduce-client-app-2.5.1.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/hbase-rest-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-annotations-2.5.1.jar:/opt/hbase/bin/../lib/hbase-hadoop2-compat-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-common-2.5.1.jar:/opt/hbase/bin/../lib/disruptor-3.3.0.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.9.13.jar:/opt/hbase/bin/../lib/aopalliance-1.0.jar:/opt/hbase/bin/../lib/jaxb-api-2.2.2.jar:/opt/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/opt/hbase/bin/../lib/java-xmlbuilder-0.4.jar:/opt/hbase/bin/../lib/protobuf-java-2.5.0.jar:/opt/hbase/bin/../lib/junit-4.12.jar:/opt/hbase/bin/../lib/hbase-shell-1.1.3.jar:/opt/hbase/bin/../lib/phoenix-4.7.0-HBase-1.1-server.jar:/opt/hbase/bin/..

/lib/hbase-it-1.1.3.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.8.jar:/opt/hbase/bin/../lib/spymemcached-2.11.6.jar:/opt/hbase/bin/../lib/hbase-server-1.1.3-tests.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.7.5.jar:/opt/hbase/bin/../lib/api-util-1.0.0-M20.jar:/opt/hbase/bin/../lib/jets3t-0.9.0.jar:/opt/hbase/bin/../lib/netty-all-4.0.23.Final.jar:/opt/hbase/bin/../lib/paranamer-2.3.jar:/opt/hbase/bin/../lib/jersey-core-1.9.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/leveldbjni-all-1.8.jar:/opt/hbase/bin/../lib/commons-io-2.4.jar:/opt/hbase/bin/../lib/commons-logging-1.2.jar:/opt/hbase/bin/../lib/commons-compress-1.4.1.jar:/opt/hbase/b

Re: apache phoenix json api

2016-04-13 Thread F21
Your PrepareAndExecute request is missing a statementId: 
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest


Before calling PrepareAndExecute, you need to send a CreateStatement 
request to the server so that it can give you a statementId. Then, use 
that statementId in your PrepareAndExecute request and all should be fine :)
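The corrected two-step flow can be sketched as follows. The `statementId` of 12345 is a placeholder for whatever the createStatement response actually returns, and the payloads are only validated locally here; against a live server they would travel either in the body (as elsewhere in this archive) or in the `request` header, depending on the Avatica version:

```shell
# Sketch: createStatement first, then reuse its statementId in
# prepareAndExecute. Nothing is sent to a server in this block.
CREATE='{"request":"createStatement","connectionId":"1"}'
EXECUTE='{"request":"prepareAndExecute","connectionId":"1","statementId":12345,
          "sql":"select * from us_population","maxRowCount":-1}'

# Check both payloads are well-formed JSON before POSTing them:
for p in "$CREATE" "$EXECUTE"; do
  echo "$p" | python3 -m json.tool > /dev/null && echo "payload ok"
done

# Against a live query server (not run here):
# curl -XPOST --data "$CREATE"  http://52.31.63.96:8765/
# curl -XPOST --data "$EXECUTE" http://52.31.63.96:8765/
```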



On 13/04/2016 8:24 PM, Plamen Paskov wrote:

Hi guys,
I just set up Apache Phoenix 4.7 and set the serialization to JSON. Now 
I'm trying to run a select statement but what I receive is this:


{
  "response": "executeResults",
  "missingStatement": true,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  },
  "results": null
}

My request looks like this:

curl -XPOST -H 'request: {"request":"prepareAndExecute", 
"connectionId":"1", "sql":"select * from us_population", 
"maxRowCount":-1}' http://52.31.63.96:8765/


Running the select above from the command line is fine and it returns 
2 rows :


sqlline version 1.1.8
0: jdbc:phoenix:localhost> select * from us_population;
+--------+--------------+-------------+
| STATE  |     CITY     | POPULATION  |
+--------+--------------+-------------+
| CA     | Los Angeles  | 3844829     |
| NY     | New York     | 8143197     |
+--------+--------------+-------------+


2 rows selected (0.087 seconds)

Can you give me some direction on what I'm doing wrong, as I'm not a 
Java dev and it's not possible for me to read and understand the source 
code?


Thanks in advance !




Re: apache phoenix json api

2016-04-13 Thread F21
I am currently building a golang client as well, so I've been looking 
at the API over the last few weeks.


I am not sure about the reasoning behind having to create a statement 
first, but in terms of Go, it fits the sql package very well, where 
statements are opened and closed.


I don't think there are any books (as of yet), but the references and 
digging through the code should be quite useful. I also recommend 
checking out the avatica project (which is a sub project of calcite) 
which is used to power the query server.


Also, the query server uses protobufs by default now, so it would 
probably be better to use that rather than the JSON api.


On 13/04/2016 10:21 PM, Plamen Paskov wrote:

Thanks for your quick and accurate answer! It's working now!
Can you give me a brief explanation of why state is maintained via the 
JSON API, so I can better understand how to create a PHP wrapper 
library? If there are some books or references where I can read more 
about Apache Phoenix, that would be very helpful.

thanks

On 13.04.2016 13:29, F21 wrote:
Your PrepareAndExecute request is missing a statementId: 
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest


Before calling PrepareAndExecute, you need to send a CreateStatement 
request to the server so that it can give you a statementId. Then, 
use that statementId in your PrepareAndExecute request and all should 
be fine :)



On 13/04/2016 8:24 PM, Plamen Paskov wrote:

Hi guys,
I just setup apache phoenix 4.7 and set the serialization to JSON. 
Now i'm trying to run a select statement but what i receive is this:


{
  "response": "executeResults",
  "missingStatement": true,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  },
  "results": null
}

My request looks like this:

curl -XPOST -H 'request: {"request":"prepareAndExecute", 
"connectionId":"1", "sql":"select * from us_population", 
"maxRowCount":-1}' http://52.31.63.96:8765/


Running the select above from the command line is fine and it 
returns 2 rows :


sqlline version 1.1.8
0: jdbc:phoenix:localhost> select * from us_population;
+--------+--------------+-------------+
| STATE  |     CITY     | POPULATION  |
+--------+--------------+-------------+
| CA     | Los Angeles  | 3844829     |
| NY     | New York     | 8143197     |
+--------+--------------+-------------+


2 rows selected (0.087 seconds)

Can you give me some direction what i'm doing wrong as i'm not java 
dev and it's not possible for me to read and understand the source 
code.


Thanks in advance !








How do I query the phoenix query server?

2016-03-23 Thread F21
I am interested in building a Go client to query the phoenix query 
server using protocol buffers.


The query server is running on http://localhost:8765, so I tried POSTing 
to localhost:8765 with the marshalled protocol buffer as the body.


Unfortunately, the server responds with:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /. Reason:
<pre>    Cannot find parser for 123456</pre></p>
<hr/><i><small>Powered by Jetty://</small></i>
</body>
</html>



"123456" is my connection-id.

There doesn't seem to be any documentation on how to query the query 
server (i.e., which endpoints to use and how the marshalled protocol 
buffer should be sent). If someone could point me in the right 
direction, that would be awesome!


Re: Phoenix transactions not committing.

2016-04-02 Thread F21

@James Taylor:

I was unable to reproduce the problem today after extensive testing. I 
think the problem is probably due to SquirrelSQL and not the query 
server. I'm not familiar with the thin client and SquirrelSQL, but does 
it do any caching?


On 3/04/2016 5:12 AM, James Taylor wrote:
Glad you have a work around. Would you mind filing a Calcite bug for 
the Avatica component after you finish your testing?


Thanks,
James

On Sat, Apr 2, 2016 at 4:10 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


I was able to successfully commit a transaction if I set the
serialization of the phoenix query server to JSON.

I will test more with protobufs and report back.

On 2/04/2016 1:11 AM, Steve Terrell wrote:

You might try looking up previous emails from me in this mailing
list.  I had some problems doing commits when using the thin
client and Phoenix 4.6.0.

Hope this helps,
Steve

On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com
<mailto:f21.gro...@gmail.com>> wrote:

As I mentioned about a week ago, I am working on a golang
client using protobuf serialization with the phoenix query
server. I have successfully dealt with the serialization of
requests and responses.

However, when I try to commit a transaction, it just doesn't
seem to commit.

Here's what I am doing (I am not including the WireMessage
message that wraps the requests/responses for brevity):

I have a table called "my_table", created by running this sql
in Squirrel SQL: CREATE TABLE my_table (k BIGINT PRIMARY KEY,
v VARCHAR) TRANSACTIONAL=true

OpenConnectionRequest {
  connection_id: "myconnectionid"
}

statementID = CreateStatementRequest {
  connection_id: "myconnectionid"
}

PrepareAndExecuteRequest {
  connection_id : "myconnectionid"
  statement_id = statementID
  sql = " UPSERT INTO my_table VALUES (1,'A')"
}

CommitRequest {
  connection_id: "myconnectionid"
}

After sending the commands to the query service, I executed
"SELECT * FROM my_table" in Squirrel SQL, but I do not see
any records. There also doesn't seem to be anything
interesting in the tephra or hbase master logs.

What is causing this problem?









Non-transactional table has transaction-like behavior

2016-04-04 Thread F21
I am using HBase 1.1.3 with Phoenix 4.8.0-SNAPSHOT. To talk to phoenix, 
I am using the phoenix query server with serialization set to JSON.


First, I create a non-transactional table:
CREATE TABLE my_table3 (k BIGINT PRIMARY KEY, v VARCHAR) 
TRANSACTIONAL=false;


I then send the following requests to the query server using curl:
curl localhost:8765 -XPOST --data '{"request": 
"openConnection","connectionId": "my-conn"}'


curl localhost:8765 -XPOST --data '{"request": 
"connectionSync","connectionId": "my-conn","connProps": {"connProps": 
"connPropsImpl","autoCommit": false,"transactionIsolation": 8}}'


curl localhost:8765 -XPOST --data '{"request": 
"createStatement","connectionId": "my-conn"}'
curl localhost:8765 -XPOST --data '{"request": 
"prepareAndExecute","connectionId": "my-conn","statementId": 
12345,"sql": "UPSERT INTO my_table3 VALUES (1,\'A\')","maxRowCount": 
100}' #Update the statement id


curl localhost:8765 -XPOST --data '{"request": 
"createStatement","connectionId": "my-conn"}'
curl localhost:8765 -XPOST --data '{"request": 
"prepareAndExecute","connectionId": "my-conn","statementId": 
12345,"sql": "UPSERT INTO my_table3 VALUES (2,\'B\')","maxRowCount": 
100}' #Update the statement id


Connect to the phoenix query server using Squirrel SQL and see no rows 
exist for the my_table3 table.


curl localhost:8765 -XPOST --data '{"request": 
"createStatement","connectionId": "my-conn"}'
curl localhost:8765 -XPOST --data '{"request": 
"prepareAndExecute","connectionId": "my-conn","statementId": 
12345,"sql": "SELECT * FROM my_table3","maxRowCount": 100}' # Shows no 
results


curl localhost:8765 -XPOST --data '{"request": "commit","connectionId": 
"my-conn"}'


Connect to the phoenix query server using Squirrel SQL and see 2 rows 
for the my_table3 table.
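A note on the shell quoting in the curl commands above: `\'` inside a single-quoted string is not a valid escape in either the shell or JSON. One way to write the upsert payload cleanly is with a heredoc (sketch only; nothing is sent to a server here):

```shell
# Build the prepareAndExecute payload with a heredoc so the SQL's single
# quotes don't fight the shell's. The statementId remains a placeholder.
PAYLOAD=$(cat <<'EOF'
{"request": "prepareAndExecute", "connectionId": "my-conn",
 "statementId": 12345,
 "sql": "UPSERT INTO my_table3 VALUES (1,'A')", "maxRowCount": 100}
EOF
)
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
# curl localhost:8765 -XPOST --data "$PAYLOAD"   # against a live server
```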


It seems a bit strange that the table exhibits transaction-like 
behavior to some extent even though it is non-transactional. Is this 
something that should be fixed?


Ideally, if a connectionSync request turns off autocommit for a given 
connection, reading and writing to a non-transactional table using that 
connection should return an error.




Re: unsubscribe

2016-03-28 Thread F21
Send your unsubscribe request to user-unsubscr...@phoenix.apache.org to 
unsubscribe. :)


On 29/03/2016 4:54 PM, Dor Ben Dov wrote:






Re: Tephra not starting correctly.

2016-03-31 Thread F21

Hey Mujtaba,

I was able to get it working. I think the addition of HBASE_HOME and 
changing the way the entrypoint scripts in my docker container were 
being called helped solve the issue.


Thanks again!

On 31/03/2016 11:38 PM, Mujtaba Chohan wrote:
Shouldn't be a bug there, as it has been working in our environment. To 
verify, can you please try this? Copy only the tephra and tephra-env.sh 
files supplied with Phoenix into a new directory with the HBASE_HOME env 
variable set, and then run tephra.


Thanks,
Mujtaba

On Wed, Mar 30, 2016 at 9:59 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


I just downloaded the tephra 0.7.0 from github and extracted it
into the container.

Using the same setup as before, I ran:
export HBASE_CP=/opt/hbase/lib
export HBASE_HOME=/opt/hbase

Running the standalone tephra using ./tephra start worked
correctly and it was able to become the leader.

Do you think this might be a bug?

On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:

I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*

On Wed, Mar 30, 2016 at 5:42 PM, F21 <f21.gro...@gmail.com
<mailto:f21.gro...@gmail.com>> wrote:

Thanks for the hints.

If I remove the client jar, it complains about a missing class:
2016-03-31 00:38:25,929 INFO  [main]
tephra.TransactionServiceMain: Starting TransactionServiceMain
Exception in thread "main" java.lang.NoClassDefFoundError:
com/google/common/util/concurrent/Service$Listener
at

co.cask.tephra.distributed.TransactionService.doStart(TransactionService.java:78)
at

com.google.common.util.concurrent.AbstractService.start(AbstractService.java:90)
at

com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:129)
at

co.cask.tephra.TransactionServiceMain.start(TransactionServiceMain.java:116)
at

co.cask.tephra.TransactionServiceMain.doMain(TransactionServiceMain.java:83)
at

co.cask.tephra.TransactionServiceMain.main(TransactionServiceMain.java:47)
Caused by: java.lang.ClassNotFoundException:
com.google.common.util.concurrent.Service$Listener
at
java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
2016-03-31 00:38:25,931 INFO [Thread-0]
tephra.TransactionServiceMain: Stopping TransactionServiceMain

After adding the client-without-hbase jar, I get a missing
method error:
java.lang.NoSuchMethodError:

co.cask.tephra.TransactionManager.addListener(Lcom/google/common/util/concurrent/Service$Listener;Ljava/util/concurrent/Executor;)V
at

co.cask.tephra.distributed.TransactionService$1.leader(TransactionService.java:83)
at

org.apache.twill.internal.zookeeper.LeaderElection.becomeLeader(LeaderElection.java:229)
at

org.apache.twill.internal.zookeeper.LeaderElection.access$1800(LeaderElection.java:53)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:207)
at

org.apache.twill.internal.zookeeper.LeaderElection$5.onSuccess(LeaderElection.java:186)
at
com.google.common.util.concurrent.Futures$5.run(Futures.java:768)
at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I am not very familiar with java or phoenix itself, but
here's my classpath:
2016-03-31 00:41:06,062 INFO  [main] zookeeper.ZooKeeper:
Client

environment:java.class.path=/opt/hbase/bin/../lib/hadoop-mapreduce-client-core-2.5.1.jar:/opt/hbase/bin/../lib/api-asn1-api-1.0.0-M20.jar:/opt/hbase/bin/../lib/hadoop-mapreduce-client-app-2.5.1.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/hbase-rest-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-annotations-2.5.1.jar:/opt/hbase/bin/../lib/hbase-hadoop2-compat-1.1.3.jar:/opt/hbase/bin/../lib/hadoop-common-2.5.1.jar:/opt/hbase/bin/../lib/disruptor-3.3.0.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.9.13.jar:/opt/hbase/bin/../lib/aopalliance-1.0.jar:/opt/hbase/bin/../lib/jaxb-api-2.2.2.jar:/opt/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/opt/hbase/bin/../lib/

Re: Phoenix transactions not committing.

2016-04-02 Thread F21
I was able to successfully commit a transaction if I set the 
serialization of the phoenix query server to JSON.


I will test more with protobufs and report back.

On 2/04/2016 1:11 AM, Steve Terrell wrote:
You might try looking up previous emails from me in this mailing 
list.  I had some problems doing commits when using the thin client 
and Phoenix 4.6.0.


Hope this helps,
Steve

On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


As I mentioned about a week ago, I am working on a golang client
using protobuf serialization with the phoenix query server. I have
successfully dealt with the serialization of requests and responses.

However, when I try to commit a transaction, it just doesn't seem
to commit.

Here's what I am doing (I am not including the WireMessage message
that wraps the requests/responses for brevity):

I have a table called "my_table", created by running this sql in
Squirrel SQL: CREATE TABLE my_table (k BIGINT PRIMARY KEY, v
VARCHAR) TRANSACTIONAL=true

OpenConnectionRequest {
  connection_id: "myconnectionid"
}

statementID = CreateStatementRequest {
  connection_id: "myconnectionid"
}

PrepareAndExecuteRequest {
  connection_id : "myconnectionid"
  statement_id = statementID
  sql = " UPSERT INTO my_table VALUES (1,'A')"
}

CommitRequest {
  connection_id: "myconnectionid"
}

After sending the commands to the query service, I executed
"SELECT * FROM my_table" in Squirrel SQL, but I do not see any
records. There also doesn't seem to be anything interesting in the
tephra or hbase master logs.

What is causing this problem?






Re: Phoenix transactions not committing.

2016-04-02 Thread F21

Hey Steve,

Thanks for your reply. I am using Phoenix 4.7.0, so these problems should 
already be fixed. Anyway, I did some more tests and noticed that the 
transactions were timing out:


2016-04-02 09:58:28,189 INFO  [tx-clean-timeout] 
tephra.TransactionManager: Tx invalid list: added tx 145959104707900 
because of timeout
2016-04-02 09:58:28,189 INFO  [tx-clean-timeout] 
tephra.TransactionManager: Invalidated 1 transactions due to timeout.
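
If commits legitimately take longer than the configured transaction timeout, the Tephra timeout can be raised in hbase-site.xml. A sketch of that property (the name data.tx.timeout and its unit of seconds come from the cluster config quoted later in this archive, where it is set to 60; the value 120 here is only an illustrative choice, not a recommendation):

```
<property>
  <name>data.tx.timeout</name>
  <!-- seconds a transaction may stay open before it is invalidated;
       120 is an illustrative value -->
  <value>120</value>
</property>
```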


On 2/04/2016 1:11 AM, Steve Terrell wrote:
You might try looking up previous emails from me in this mailing 
list.  I had some problems doing commits when using the thin client 
and Phoenix 4.6.0.


Hope this helps,
Steve

On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


As I mentioned about a week ago, I am working on a golang client
using protobuf serialization with the phoenix query server. I have
successfully dealt with the serialization of requests and responses.

However, when I try to commit a transaction, it just doesn't seem
to commit.

Here's what I am doing (I am not including the WireMessage message
that wraps the requests/responses for brevity):

I have a table called "my_table", created by running this sql in
Squirrel SQL: CREATE TABLE my_table (k BIGINT PRIMARY KEY, v
VARCHAR) TRANSACTIONAL=true

OpenConnectionRequest {
  connection_id: "myconnectionid"
}

statementID = CreateStatementRequest {
  connection_id: "myconnectionid"
}

PrepareAndExecuteRequest {
  connection_id : "myconnectionid"
  statement_id = statementID
  sql = " UPSERT INTO my_table VALUES (1,'A')"
}

CommitRequest {
  connection_id: "myconnectionid"
}

After sending the commands to the query service, I executed
"SELECT * FROM my_table" in Squirrel SQL, but I do not see any
records. There also doesn't seem to be anything interesting in the
tephra or hbase master logs.

What is causing this problem?






Phoenix transactions not committing.

2016-03-31 Thread F21
As I mentioned about a week ago, I am working on a golang client using 
protobuf serialization with the phoenix query server. I have 
successfully dealt with the serialization of requests and responses.


However, when I try to commit a transaction, it just doesn't seem to 
commit.


Here's what I am doing (I am not including the WireMessage message that 
wraps the requests/responses for brevity):


I have a table called "my_table", created by running this sql in 
Squirrel SQL: CREATE TABLE my_table (k BIGINT PRIMARY KEY, v VARCHAR) 
TRANSACTIONAL=true


OpenConnectionRequest {
  connection_id: "myconnectionid"
}

statementID = CreateStatementRequest {
  connection_id: "myconnectionid"
}

PrepareAndExecuteRequest {
  connection_id : "myconnectionid"
  statement_id = statementID
  sql = " UPSERT INTO my_table VALUES (1,'A')"
}

CommitRequest {
  connection_id: "myconnectionid"
}

After sending the commands to the query service, I executed "SELECT * 
FROM my_table" in Squirrel SQL, but I do not see any records. There also 
doesn't seem to be anything interesting in the tephra or hbase master logs.


What is causing this problem?


Golang driver for Phoenix and Avatica available

2016-05-16 Thread F21

Hi all,

I have just open sourced a golang driver for Phoenix and Avatica.

The code is licensed using the Apache 2 License and is available here: 
https://github.com/Boostport/avatica

Contributions are very welcome! :)

Cheers,

Francis



Re: apache phoenix json api

2016-04-20 Thread F21
lable":1,
  "signed":true,
  "displaySize":40,
  "label":"POPULATION",
  "columnName":"POPULATION",
  "schemaName":"",
  "precision":0,
  "scale":0,
  "tableName":"US_POPULATION",
  "catalogName":"",
  "type":{
 "type":"scalar",
 "id":-5,
 "name":"BIGINT",
 "rep":"PRIMITIVE_LONG"
  },
  "readOnly":true,
  "writable":false,
  "definitelyWritable":false,
  "columnClassName":"java.lang.Long"
   }
],
"sql":null,
"parameters":[

],
"cursorFactory":{
   "style":"LIST",
   "clazz":null,
   "fieldNames":null
},
"statementType":null
 },
 "firstFrame":{
"offset":0,
"done":true,
"rows":[
   [
  "CA",
  "California",
  10
   ]
]
 },
 "updateCount":-1,
 "rpcMetadata":{
"response":"rpcMetadata",
"serverAddress":"f826338-phoenix-server.f826338:8765"
 }
  }
   ]
}

Can you confirm the versions you are running for HBase and Phoenix and 
try issuing those requests again with a new table?


Cheers,
Francis

On 20/04/2016 5:24 PM, Plamen Paskov wrote:

Josh,
I hope someone familiar can answer this question :)

On 19.04.2016 22:59, Josh Elser wrote:

Thanks for helping out, Francis!

Interesting that Jackson didn't fail when the connectionId was being 
passed as a number and not a string (maybe it's smart enough to 
convert that?).


Why does your commit response have a result set in it? A 
CommitResponse is essentially empty.


http://calcite.apache.org/avatica/docs/json_reference.html#commitresponse 



Plamen Paskov wrote:

I confirm that the data is missing when connecting using the sqlline.py
command line client. If I upsert a record from within sqlline.py, it's
OK. I will try your suggestion to issue prepare and execute as separate
requests.
Thanks!

On 19.04.2016 14:24, F21 wrote:

Can you try using something like SquirrelSQL or sqlline to see if the
data was inserted properly?

Another thing I would try is to use separate prepare and execute
requests when SELECTing rather than using prepareAndExecute.

On 19/04/2016 9:21 PM, Plamen Paskov wrote:

Yep
Here are the responses (the new data is missing again):

Prepare and execute response for upsert

{
"response": "executeResults",
"missingStatement": false,
"rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
},
"results": [
{
"response": "resultSet",
"connectionId": "9",
"statementId": 21,
"ownStatement": false,
"signature": null,
"firstFrame": null,
"updateCount": 1,
"rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
}
}
]
}

commit response
{
"response": "resultSet",
"connectionId": "9",
"statementId": 22,
"ownStatement": true,
"signature": {
"columns": [
{
"ordinal": 0,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 1,
"signed": false,
"displaySize": 40,
"label": "TABLE_SCHEM",
"columnName": "TABLE_SCHEM",
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "SYSTEM.TABLE",
"catalogName": "",
"type": {
"type": "scalar",
"id": 12,
"name": "VARCHAR",
"rep": "STRING"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.String"
},
{
"ordinal": 1,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currenc

Re: apache phoenix json api

2016-04-19 Thread F21
Can you show the requests you are currently sending? This is what a 
PrepareAndExecute request should look like:

https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest

On 19/04/2016 4:47 PM, Plamen Paskov wrote:

Josh,
I removed the quotation marks but the result is still the same. I still 
cannot see the new data added with either prepareAndExecute or 
prepareAndExecuteBatch.


On 17.04.2016 22:45, Josh Elser wrote:
statementId is an integer, not a string. Remove the quotation marks 
around the value "2".


Plamen Paskov wrote:

Now another error appears for prepare and execute batch request:


HTTP ERROR 500
Problem accessing /. Reason:

com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException:
Unrecognized field "statementId" (class
org.apache.calcite.avatica.remote.Service$SchemasRequest), not marked as
ignorable (3 known properties: "connectionId", "catalog", "schemaPattern"])
at [Source: java.io.StringReader@3b5c02a5; line: 6, column: 2] (through
reference chain:
org.apache.calcite.avatica.remote.SchemasRequest["statementId"])

Powered by Jetty://




My request looks like:
{
"request": "prepareAndExecuteBatch",
"connectionId": "3",
"statementId": "2",
"sqlCommands": [ "UPSERT INTO us_population VALUES('C1','City
1',10)", "UPSERT INTO us_population VALUES('C2','City 
2',100)" ]

}

Any help will be appreciated!
Thanks

On 14.04.2016 14:58, Plamen Paskov wrote:

Ah, I found the error. It should be "sqlCommands": instead of
"sqlCommands",
The documentation syntax is wrong for this request type:
http://calcite.apache.org/avatica/docs/json_reference.html#prepareandexecutebatchrequest 




On 14.04.2016 11:09, Plamen Paskov wrote:

@Josh: thanks for your answer.

Folks,
I'm trying to prepare and execute batch request with no luck.
These are the requests i send:

{
"request": "openConnection",
"connectionId": "2"
}

{
"request": "createStatement",
"connectionId": "2"
}

{
"request": "prepareAndExecuteBatch",
"connectionId": "2",
"statementId": 1,
"sqlCommands", [ "UPSERT INTO us_population(STATE,CITY,POPULATION)
VALUES('C1','City 1',10)", "UPSERT INTO
us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]
}

And this is the response i receive:


HTTP ERROR 500
Problem accessing /. Reason:

com.fasterxml.jackson.core.JsonParseException: Unexpected
character (',' (code 44)): was expecting a colon to separate field
name and value
at [Source: java.io.StringReader@41709697; line: 5, column: 17]

Powered by Jetty://






On 13.04.2016 19:27, Josh Elser wrote:

For reference materials: definitely check out
https://calcite.apache.org/avatica/

While JSON is easy to get started with, there are zero guarantees on
compatibility between versions. If you use protobuf, we should be
able to hide all schema drift from you as a client (e.g.
applications you write against Phoenix 4.7 should continue to work
against 4.8, 4.9, etc).

Good luck with the PHP client -- feel free to reach out if you have
more issues. Let us know when you have something to share. I'm sure
others would also find it very useful.

F21 wrote:
I am currently building a golang client as well, so I've been looking
over the API for the last few weeks.

I am not sure about the decision to have to create a statement first,
but in terms of Go, it fits the sql package very well, where statements
are opened and closed.

I don't think there are any books (as of yet), but the references and
digging through the code should be quite useful. I also recommend
checking out the Avatica project (a sub-project of Calcite), which is
used to power the query server.

Also, the query server uses protobufs by default now, so it would
probably be better to use that rather than the JSON API.

On 13/04/2016 10:21 PM, Plamen Paskov wrote:

Thanks for your quick and accurate answer! It's working now!
Can you give me a brief explanation of why state has to be maintained
via the JSON API, so I can better understand how to create a PHP
wrapper library? If there are some books or references where I can
read more about Apache Phoenix, that would be very helpful.
Thanks

On 13.04.2016 13:29, F21 wrote:

Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest 





Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then,
use that statementId in your PrepareAndExecute request and al

Re: apache phoenix json api

2016-04-19 Thread F21

The connectionId for all requests should be a string. Can you try that?
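
For illustration, here are the first two requests from that sequence with the connection id quoted as a string (only the quoting changes; the id value itself is an arbitrary client-chosen string):

```
{
  "request": "openConnection",
  "connectionId": "8"
}

{
  "request": "createStatement",
  "connectionId": "8"
}
```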


On 19/04/2016 5:07 PM, Plamen Paskov wrote:

That's what I tried, but with no luck again:

{
  "request": "openConnection",
  "connectionId": 8
}

{
  "request": "createStatement",
  "connectionId": 8
}

{
  "request": "prepareAndExecute",
  "connectionId": 8,
  "statementId": 18,
  "sql": "UPSERT INTO us_population VALUES('YA','Yambol',10)",
  "maxRowCount": -1
}

{
  "request": "commit",
  "connectionId": 8
}

{
  "request": "createStatement",
  "connectionId": 8
}

{
  "request": "prepareAndExecute",
  "connectionId": 8,
  "statementId": 20,
  "sql": "SELECT * FROM us_population",
  "maxRowCount": -1
}

And this is the commit command response (if it can give you more 
insights)


{
  "response": "resultSet",
  "connectionId": "8",
  "statementId": 19,
  "ownStatement": true,
  "signature": {
"columns": [
  {
"ordinal": 0,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 1,
"signed": false,
"displaySize": 40,
"label": "TABLE_SCHEM",
"columnName": "TABLE_SCHEM",
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "SYSTEM.TABLE",
"catalogName": "",
"type": {
  "type": "scalar",
  "id": 12,
  "name": "VARCHAR",
  "rep": "STRING"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.String"
  },
  {
"ordinal": 1,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 1,
"signed": false,
"displaySize": 40,
"label": "TABLE_CATALOG",
"columnName": "TABLE_CATALOG",
"schemaName": "",
"precision": 0,
"scale": 0,
    "tableName": "SYSTEM.TABLE",
"catalogName": "",
"type": {
  "type": "scalar",
  "id": 12,
  "name": "VARCHAR",
  "rep": "STRING"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.String"
  }
],
"sql": null,
"parameters": [],
"cursorFactory": {
  "style": "LIST",
  "clazz": null,
  "fieldNames": null
},
"statementType": null
  },
  "firstFrame": {
"offset": 0,
"done": true,
"rows": [
  [
null,
null
  ],
  [
"SYSTEM",
null
  ]
]
  },
  "updateCount": -1,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  }
}


On 19.04.2016 09:56, F21 wrote:

That looks fine to me!

I think phoenix has AutoCommit set to false by default. So, you will 
need to issue a commit before selecting: 
https://calcite.apache.org/docs/avatica_json_reference.html#commitrequest


Let me know if it works! :)

On 19/04/2016 4:54 PM, Plamen Paskov wrote:

The requests are as follows:

- open a connection
{
  "request": "openConnection",
  "connectionId": 5
}

- create statement
{
  "request": "createStatement",
  "connectionId": 5
}

- prepare and execute the upsert
{
  "request": "prepareAndExecute",
  "connectionId": 5,
  "statementId": 12,
  "sql": "UPSERT INTO us_population VALUES('CA','California',10)",
  "maxRowCount": -1
}

- create new statement for next select (not sure if i need it)
{
  "request": "createStatement",
  "connectionId": 5
}

- select all cities
{
  "request&qu

Re: apache phoenix json api

2016-04-19 Thread F21
ot;STATE",
"columnName": "STATE",
"schemaName": "",
"precision": 2,
"scale": 0,
"tableName": "US_POPULATION",
"catalogName": "",
"type": {
  "type": "scalar",
  "id": 1,
  "name": "CHAR",
  "rep": "STRING"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.String"
  },
  {
"ordinal": 1,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 0,
"signed": false,
"displaySize": 40,
"label": "CITY",
"columnName": "CITY",
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "US_POPULATION",
"catalogName": "",
"type": {
  "type": "scalar",
  "id": 12,
  "name": "VARCHAR",
  "rep": "STRING"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.String"
  },
  {
"ordinal": 2,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 1,
"signed": true,
"displaySize": 40,
"label": "POPULATION",
"columnName": "POPULATION",
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "US_POPULATION",
"catalogName": "",
"type": {
  "type": "scalar",
  "id": -5,
  "name": "BIGINT",
  "rep": "PRIMITIVE_LONG"
},
"readOnly": true,
"writable": false,
"definitelyWritable": false,
"columnClassName": "java.lang.Long"
  }
],
"sql": null,
"parameters": [],
"cursorFactory": {
  "style": "LIST",
  "clazz": null,
  "fieldNames": null
},
"statementType": null
  },
  "firstFrame": {
"offset": 0,
"done": true,
"rows": [
  [
"CA",
"Los Angeles",
3844829
  ],
  [
"IL",
"Chicago",
2000
  ],
  [
"NY",
"New York",
8143197
  ]
]
  },
  "updateCount": -1,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  }
}
  ]
}

On 19.04.2016 11:52, F21 wrote:

The connectionId for all requests should be a string. Can you try that?


On 19/04/2016 5:07 PM, Plamen Paskov wrote:

That's what I tried, but with no luck again:

{
  "request": "openConnection",
  "connectionId": 8
}

{
  "request": "createStatement",
  "connectionId": 8
}

{
  "request": "prepareAndExecute",
  "connectionId": 8,
  "statementId": 18,
  "sql": "UPSERT INTO us_population VALUES('YA','Yambol',10)",
  "maxRowCount": -1
}

{
  "request": "commit",
  "connectionId": 8
}

{
  "request": "createStatement",
  "connectionId": 8
}

{
  "request": "prepareAndExecute",
  "connectionId": 8,
  "statementId": 20,
  "sql": "SELECT * FROM us_population",
  "maxRowCount": -1
}

And this is the commit comman

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread F21
)
at 
org.apache.tephra.TransactionManager.ensureAvailable(TransactionManager.java:709)
at 
org.apache.tephra.TransactionManager.startTx(TransactionManager.java:768)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:728)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:716)
at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.startShort(TransactionServiceThriftHandler.java:71)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tephra.rpc.ThriftRPCServer$1.invoke(ThriftRPCServer.java:261)

at com.sun.proxy.$Proxy17.startShort(Unknown Source)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:974)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:959)
at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)

at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)

at org.apache.thrift.server.Invocation.run(Invocation.java:18)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)

If I run jps, I get the following:
bash-4.3# jps
771 Jps
138 HMaster
190 TransactionServiceMain

Cheers,
Francis

On 1/09/2016 4:01 AM, Thomas D'Silva wrote:
Can you check the Transaction Manager logs and see if there are any 
errors? Also, can you do a jps and confirm the Transaction Manager 
is running?


On Wed, Aug 31, 2016 at 2:12 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


Just another update. Even though the logs say that the
transaction manager is not running, it is actually running.

I confirmed this by checking the output of ps and connecting to
the transaction manager:

bash-4.3# ps
PID   USER TIME   COMMAND
1 root   0:01 bash /run-hbase-phoenix.sh
  137 hadoop 0:19 /usr/lib/jvm/java-1.8-openjdk/bin/java
-Dproc_master -XX:OnOutOfMemoryError=kill -9 %p
-XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:Ma
  189 hadoop 0:08 /usr/lib/jvm/java-1.8-openjdk/bin/java
-XX:+UseConcMarkSweepGC -cp
/opt/hbase/bin/../lib/*:/opt/hbase/bin/../conf/:/opt/hbase/phoenix-c
  542 root   0:00 /bin/bash
 9035 root   0:00 sleep 1
 9036 root   0:00 ps

bash-4.3# wget localhost:15165
Connecting to localhost:15165 (127.0.0.1:15165
<http://127.0.0.1:15165>)
wget: error getting response: Connection reset by peer


On 31/08/2016 3:25 PM, F21 wrote:

This only seems to be a problem when I have HBase running in
fully distributed mode (1 master, 1 regionserver and 1
zookeeper node in different docker images).

If I have HBase running in standalone mode with HBase and
Phoenix and the Query server in 1 docker image, it works
correctly.

On 31/08/2016 11:21 AM, F21 wrote:

I have HBase 1.2.2 and Phoenix 4.8.0 running on my HBase
master running on alpine linux with OpenJDK JRE 8.

This is my hbase-site.xml:




  
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>m9edd51-zookeeper.m9edd51</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>phoenix.transactions.enabled</name>
    <value>true</value>
  </property>
</configuration>

I am able to start the master correctly. I am also able to
create non-transactional table.

However, if I create a transactional table, I get this
error: ERROR [TTransactionServer-rpc-0]
thrift.ProcessFunction: Internal error processing startShort

This is what I see in the logs:

2016-08-31 01:08:33,560 WARN
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)]
zookeeper.ClientCnxn: Session 0x156de22abec0004 for server
null, unexpected error, closing

Re: 回复: 回复: 回复: Can query server run with hadoop ha mode?

2016-09-08 Thread F21

Glad you got it working! :)

Cheers,
Francis

On 8/09/2016 7:11 PM, zengbaitang wrote:


I found the reason: I had not set the HADOOP_CONF_DIR environment
variable. After setting it, the problem was solved.

Thank you F21, thank you very much!


-- Original message --
*From:* "F21" <f21.gro...@gmail.com>
*Sent:* Thursday, 8 September 2016, 3:33 PM
*To:* "user" <user@phoenix.apache.org>
*Subject:* Re: 回复: 回复: Can query server run with hadoop ha mode?

From the response of your curl, it appears that the query server is 
started correctly and running. The next bit to check is to see if it 
can talk to the HBase servers properly.


Add phoenix.queryserver.serialization to the hbase-site.xml for the 
query server and set the value to JSON.
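
A sketch of that property as it would appear in the query server's hbase-site.xml (the property name comes from the message above; earlier in this archive it is noted that the query server defaults to protobuf serialization, so JSON here is only a debugging switch):

```
<property>
  <name>phoenix.queryserver.serialization</name>
  <!-- switch from the default PROTOBUF serialization to JSON
       so requests can be hand-written with curl or wget -->
  <value>JSON</value>
</property>
```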


Then try and send a CatalogsRequest to the query server using curl or 
wget.
See here for how to set up the request 
https://calcite.apache.org/docs/avatica_json_reference.html#catalogsrequest


Before sending the CatalogsRequest, remember to send an 
OpenConnectionRequest first: 
https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest


In your case, the `info` key of the OpenConnectionRequest can be omitted.

Cheers,
Francis
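
A minimal version of that exchange, assuming JSON serialization is enabled (the request shapes follow the Avatica JSON reference linked above; the connection id is an arbitrary client-chosen string):

```
{
  "request": "openConnection",
  "connectionId": "test-conn-1"
}

{
  "request": "catalogs",
  "connectionId": "test-conn-1"
}
```

A non-error response to the catalogs request confirms the query server can reach HBase.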

On 8/09/2016 4:12 PM, zengbaitang wrote:

Yes, the query server runs on one of the regionservers.

When I exec curl 'http://tnode02:8765', the terminal returns:


Error 404 - Not Found

Error 404 - Not Found.
No context on this server matched or handled this request.
Contexts known to this server are:
Powered by Jetty:// Java Web Server







-- Original message --
*From:* "F21" <f21.gro...@gmail.com>
*Sent:* Thursday, 8 September 2016, 2:01 PM
*To:* "user" <user@phoenix.apache.org>
*Subject:* Re: 回复: Can query server run with hadoop ha mode?

Your logs do not seem to show any errors.

You mentioned that you have 2 hbase-site.xml. Are the Phoenix query 
servers running on the same machine as the HBase servers? If not, the 
hbase-site.xml for the phoenix query servers also needs the zookeeper 
configuration.


Did you also try to use curl or wget to get 
http://your-phoenix-query-server:8765 to see if there's a response?


Cheers,
Francis

On 8/09/2016 3:54 PM, zengbaitang wrote:

Hi F21, I am sure hbase-site.xml is configured properly.

here is my *hbase-site.xml (hbase side)*:


<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://stage-cluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>tnode01,tnode02,tnode03</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>18</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>12</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.1</value>
  </property>
  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
  <property>
    <name>hbase.region.server.rpc.scheduler.factory.class</name>
    <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler 
    that uses separate queues for index and metadata updates</description>
  </property>
  <property>
    <name>hbase.rpc.controllerfactory.class</name>
    <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler 
    that uses separate queues for index and metadata updates</description>
  </property>
</configuration>

*and the following is phoenix side hbase-site.xml*

  
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>

  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>



*and the following is query server log*
2016-09-08 13:33:03,218 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/usr/local/hadoop-2.7.1/bin:/usr/local/hbase-1.1.2/bin:/usr/local/apache-hive-1.2.1-bin/bin:/usr/local/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/bin
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTCONTROL=ignoredups
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:HCAT_HOME=/usr/local/apache-hive-1.2.1-bin/hcatalog
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTSIZE=1000
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:JAVA_HOME=/usr/local/java/latest
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:TERM=xterm
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:LANG=en_US.UTF-8
2016-09-08 13:33:03,220 INFO 
org.

Re: 回复: Can query server run with hadoop ha mode?

2016-09-08 Thread F21

Your logs do not seem to show any errors.

You mentioned that you have 2 hbase-site.xml. Are the Phoenix query 
servers running on the same machine as the HBase servers? If not, the 
hbase-site.xml for the phoenix query servers also needs the zookeeper 
configuration.


Did you also try to use curl or wget to get 
http://your-phoenix-query-server:8765 to see if there's a response?


Cheers,
Francis

On 8/09/2016 3:54 PM, zengbaitang wrote:

Hi F21, I am sure hbase-site.xml is configured properly.

here is my *hbase-site.xml (hbase side)*:


<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://stage-cluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>tnode01,tnode02,tnode03</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>18</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>12</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.1</value>
  </property>
  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
  <property>
    <name>hbase.region.server.rpc.scheduler.factory.class</name>
    <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that 
    uses separate queues for index and metadata updates</description>
  </property>
  <property>
    <name>hbase.rpc.controllerfactory.class</name>
    <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that 
    uses separate queues for index and metadata updates</description>
  </property>
</configuration>


*and the following is phoenix side hbase-site.xml*

  
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>

  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>



*and the following is query server log*
2016-09-08 13:33:03,218 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/usr/local/hadoop-2.7.1/bin:/usr/local/hbase-1.1.2/bin:/usr/local/apache-hive-1.2.1-bin/bin:/usr/local/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/bin
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTCONTROL=ignoredups
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:HCAT_HOME=/usr/local/apache-hive-1.2.1-bin/hcatalog
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTSIZE=1000
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:JAVA_HOME=/usr/local/java/latest
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:TERM=xterm
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:LANG=en_US.UTF-8
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:G_BROKEN_FILENAMES=1
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:SELINUX_LEVEL_REQUESTED=
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: env:SELINUX_ROLE_REQUESTED=
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:MAIL=/var/spool/mail/hadoop
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: env:LOGNAME=hadoop
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:PWD=/usr/local/apache-phoenix-4.8.0-HBase-1.1-bin/bin
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:KYLIN_HOME=/usr/local/apache-kylin-1.5.1-bin
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: env:_=./queryserver.py
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:LESSOPEN=|/usr/bin/lesspipe.sh %s
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: env:SHELL=/bin/bash
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: env:SELINUX_USE_CURRENT_RANGE=
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:QTINC=/usr/lib64/qt-3.3/include
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: env:CVS_RSH=ssh
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: env:SSH_TTY=/dev/pts/0
2016-09-08 13:33:03,222 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:SSH_CLIENT=172.18.100.27 51441 22
2016-09-08 13:33:03,223 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:HIVE_HOME=/usr/local/apache-hive-1.2.1-bin
2016-09-08 13:33:03,223 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:OLDPWD=/usr/local/hadoop-2.7.1/etc/hadoop
2016-09-08 13:33:03,223 INFO 
org.apache.phoenix.queryserver.server.Main: env:USER=hadoop
2016-09-08 13:33:03,223 INFO

Re: Re: Re: Can query server run with hadoop ha mode?

2016-09-08 Thread F21
From the response of your curl, it appears that the query server started 
correctly and is running. The next thing to check is whether it can 
talk to the HBase servers properly.


Add phoenix.queryserver.serialization to the hbase-site.xml for the 
query server and set the value to JSON.
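As a sketch, the property above would look like this in the query server's hbase-site.xml (the property name is from this thread; only the placement is assumed):

```xml
<!-- hbase-site.xml on the query server: switch Avatica serialization to JSON -->
<property>
  <name>phoenix.queryserver.serialization</name>
  <value>JSON</value>
</property>
```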


Then try sending a CatalogsRequest to the query server using curl or wget.
See here for how to set up the request 
https://calcite.apache.org/docs/avatica_json_reference.html#catalogsrequest


Before sending the CatalogsRequest, remember to send an 
OpenConnectionRequest first: 
https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest


In your case, the `info` key of the OpenConnectionRequest can be omitted.
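As a minimal sketch (assuming JSON serialization as configured above; the host tnode02:8765 is taken from this thread, and the connection id is an arbitrary client-chosen value), the two payloads could be built like this:

```python
# Sketch: build the two Avatica JSON payloads described above. The request
# shapes follow the Avatica JSON reference linked above; the target host
# tnode02:8765 comes from this thread.
import json
import uuid

def open_connection_request(connection_id):
    # The "info" key is omitted, as noted above.
    return json.dumps({"request": "openConnection",
                       "connectionId": connection_id})

def catalogs_request(connection_id):
    return json.dumps({"request": "catalogs",
                       "connectionId": connection_id})

conn_id = str(uuid.uuid4())  # any client-chosen id works
print(open_connection_request(conn_id))
print(catalogs_request(conn_id))
# Each payload is sent as the POST body, e.g.:
#   curl -XPOST --data '<payload>' http://tnode02:8765/
```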

Cheers,
Francis

On 8/09/2016 4:12 PM, zengbaitang wrote:

yes, the query server run on one of the regionservers

and when I exec curl 'http://tnode02:8765', the terminal returns:


Error 404 - Not Found

Error 404 - Not Found.
No context on this server matched or handled this request. Contexts 
known to this server are:
Powered by Jetty:// Java Web Server







------------------ Original Message ------------------
*From:* "F21" <f21.gro...@gmail.com>
*Date:* 2016-09-08 (Thu) 2:01
*To:* "user" <user@phoenix.apache.org>
*Subject:* Re: Re: Can query server run with hadoop ha mode?

Your logs do not seem to show any errors.

You mentioned that you have 2 hbase-site.xml. Are the Phoenix query 
servers running on the same machine as the HBase servers? If not, the 
hbase-site.xml for the phoenix query servers also needs the zookeeper 
configuration.


Did you also try to use curl or wget to get 
http://your-phoenix-query-server:8765 to see if there's a response?


Cheers,
Francis

On 8/09/2016 3:54 PM, zengbaitang wrote:

hi F21, I am sure hbase-site.xml was configured properly,

here is my *hbase-site.xml (hbase side)*:


<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://stage-cluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>tnode01,tnode02,tnode03</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>18</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>12</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.1</value>
  </property>
  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
  <property>
    <name>hbase.region.server.rpc.scheduler.factory.class</name>
    <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that
    uses separate queues for index and metadata updates</description>
  </property>
  <property>
    <name>hbase.rpc.controllerfactory.class</name>
    <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that
    uses separate queues for index and metadata updates</description>
  </property>
</configuration>






*and the following is phoenix side hbase-site.xml*

  
<configuration>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>
</configuration>
  



*and the following is the query server log*
2016-09-08 13:33:03,218 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/usr/local/hadoop-2.7.1/bin:/usr/local/hbase-1.1.2/bin:/usr/local/apache-hive-1.2.1-bin/bin:/usr/local/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/bin
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTCONTROL=ignoredups
2016-09-08 13:33:03,219 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:HCAT_HOME=/usr/local/apache-hive-1.2.1-bin/hcatalog
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:HISTSIZE=1000
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:JAVA_HOME=/usr/local/java/latest
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:TERM=xterm
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:LANG=en_US.UTF-8
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:G_BROKEN_FILENAMES=1
2016-09-08 13:33:03,220 INFO 
org.apache.phoenix.queryserver.server.Main: env:SELINUX_LEVEL_REQUESTED=
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: env:SELINUX_ROLE_REQUESTED=
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: 
env:MAIL=/var/spool/mail/hadoop
2016-09-08 13:33:03,221 INFO 
org.apache.phoenix.queryserver.server.Main: env:LOGNAME=hadoop
2016-09-08 13:33:03,221 INFO 
or

Re: Is the JSON that is sent to the server converted to Protobufs or is the Protobufs converted to JSON to be used by Phoenix

2016-09-04 Thread F21
I am not sure what you mean here. The phoenix query server (which is 
based on avatica, which is a subproject in the Apache Calcite project) 
accepts both Protobufs and JSON depending on the value of 
"phoenix.queryserver.serialization".


The server implements readers that will convert the Protobuf or JSON 
request into its internal data structures to perform further processing.


On 4/09/2016 7:19 PM, Cheyenne Forbes wrote:

It would be great if this question could be answered.

Thank you,

Cheyenne





Re: Can query server run with hadoop ha mode?

2016-09-07 Thread F21
I have a test cluster running HDFS in HA mode with HBase + Phoenix on 
docker running successfully.


Can you check that you have a properly configured hbase-site.xml that is 
available to your phoenix query server? Make sure hbase.zookeeper.quorum 
and zookeeper.znode.parent are present. If zookeeper does not run on 
2181, you will also need hbase.zookeeper.property.clientPort.
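A minimal sketch of such a client-side hbase-site.xml (the ZooKeeper hostnames below are placeholders, not from this thread):

```xml
<!-- Minimal hbase-site.xml for a query server that is NOT co-located with
     HBase. Replace the hostnames with your own ZooKeeper quorum. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1,zk2,zk3</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <!-- Only needed if ZooKeeper does not listen on the default port 2181. -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```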


As a quick test, can you wget or curl http://your-phoenix-server:8765 to 
see if it has any response? Finally, if you could post the logs from the 
query server, that would be great too.


Cheers,
Francis


On 8/09/2016 12:55 PM, zengbaitang wrote:

I have a hadoop ha cluster and hbase, and  have installed phoenix.

I try to use query server today , I start the queryserver and then I 
exec the following command


./sqlline-thin.py http://tnode02:8765 sel.sql

the terminal responds with the following error (*stage-cluster* is the 
value of hadoop's dfs.nameservices).

How do I solve this error?

AvaticaClientRuntimeException: Remote driver error: RuntimeException: 
java.sql.SQLException: ERROR 103 (08004): Unable to establish 
connection. -> SQLException: ERROR 103 (08004): Unable to establish 
connection. -> IOException: 
java.lang.reflect.InvocationTargetException -> 
InvocationTargetException: (null exception message) -> 
ExceptionInInitializerError: (null exception message) -> 
IllegalArgumentException: java.net.UnknownHostException: stage-cluster 
-> UnknownHostException: stage-cluster. Error -1 (0) null


java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): 
Unable to establish connection.
at 
org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:619)
at 
org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:299)
at 
org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1748)
at 
org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1728)
at 
org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
at 
org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:124)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)

at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: ERROR 103 (08004): Unable to 
establish connection.
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:454)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:393)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:219)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2321)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2300)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2300)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:231)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
at 
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)

at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at 
org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:616)

... 15 more
Caused by: java.io.IOException: 
java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at 

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread F21
Just another update. Even though the log says that the transaction 
manager is not running, it is actually running.


I confirmed this by checking the output of ps and connecting to the 
transaction manager:


bash-4.3# ps
PID   USER TIME   COMMAND
1 root   0:01 bash /run-hbase-phoenix.sh
  137 hadoop 0:19 /usr/lib/jvm/java-1.8-openjdk/bin/java 
-Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC 
-XX:PermSize=128m -XX:Ma
  189 hadoop 0:08 /usr/lib/jvm/java-1.8-openjdk/bin/java 
-XX:+UseConcMarkSweepGC -cp 
/opt/hbase/bin/../lib/*:/opt/hbase/bin/../conf/:/opt/hbase/phoenix-c

  542 root   0:00 /bin/bash
 9035 root   0:00 sleep 1
 9036 root   0:00 ps

bash-4.3# wget localhost:15165
Connecting to localhost:15165 (127.0.0.1:15165)
wget: error getting response: Connection reset by peer

On 31/08/2016 3:25 PM, F21 wrote:
This only seems to be a problem when I have HBase running in fully 
distributed mode (1 master, 1 regionserver and 1 zookeeper node in 
different docker images).


If I have HBase running in standalone mode with HBase and Phoenix and 
the Query server in 1 docker image, it works correctly.


On 31/08/2016 11:21 AM, F21 wrote:
I have HBase 1.2.2 and Phoenix 4.8.0 running on my HBase master 
running on alpine linux with OpenJDK JRE 8.


This is my hbase-site.xml:




  
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>m9edd51-zookeeper.m9edd51</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>phoenix.transactions.enabled</name>
    <value>true</value>
  </property>
</configuration>


I am able to start the master correctly. I am also able to create a 
non-transactional table.


However, if I create a transactional table, I get this error: ERROR 
[TTransactionServer-rpc-0] thrift.ProcessFunction: Internal error 
processing startShort


This is what I see in the logs:

2016-08-31 01:08:33,560 WARN 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session 0x156de22abec0004 for server null, 
unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-08-31 01:08:33,616 INFO  [DefaultMetricsCollector STOPPING] 
metrics.DefaultMetricsCollector: Stopped metrics reporter
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Took 170.7 ms to stop
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
RPC server for TTransactionServer stopped.
2016-08-31 01:08:34,776 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-08-31 01:08:34,777 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
2016-08-31 01:08:34,778 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 
0x156de22abec0004, negotiated timeout = 4
2016-08-31 01:08:34,783 INFO [leader-election-tx.service-leader] 
zookeeper.LeaderElection: Connected to ZK, running election: 
m9edd51-zookeeper.m9edd51 for /tx.service/leader
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Starting RPC server for TTransactionServer
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Running RPC server for TTransactionServer
2016-08-31 01:08:34,816 INFO  [ThriftRPCServer] 
server.TThreadedSelectorServerWithFix: Starting 
TThreadedSelectorServerWithFix
2016-08-31 01:08:34,822 INFO [leader-election-tx.service-leader] 
distributed.TransactionService: Transaction Thrift Service started 
successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
2016-08-31 01:10:42,830 ERROR [TTransactionServer-rpc-0] 
thrift.ProcessFunction: Internal error processing startShort

java.lang.IllegalStateException: Transaction Manager is not running.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.tephra.TransactionManager.ensureAvailable(TransactionManager.java:709)
at 
org.apache.tephra.TransactionManager.startTx(TransactionManager.java:768)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:728)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:716

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-30 Thread F21
This only seems to be a problem when I have HBase running in fully 
distributed mode (1 master, 1 regionserver and 1 zookeeper node in 
different docker images).


If I have HBase running in standalone mode with HBase and Phoenix and 
the Query server in 1 docker image, it works correctly.


On 31/08/2016 11:21 AM, F21 wrote:
I have HBase 1.2.2 and Phoenix 4.8.0 running on my HBase master 
running on alpine linux with OpenJDK JRE 8.


This is my hbase-site.xml:




  
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>m9edd51-zookeeper.m9edd51</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>phoenix.transactions.enabled</name>
    <value>true</value>
  </property>
</configuration>


I am able to start the master correctly. I am also able to create a 
non-transactional table.


However, if I create a transactional table, I get this error: ERROR 
[TTransactionServer-rpc-0] thrift.ProcessFunction: Internal error 
processing startShort


This is what I see in the logs:

2016-08-31 01:08:33,560 WARN 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session 0x156de22abec0004 for server null, 
unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-08-31 01:08:33,616 INFO  [DefaultMetricsCollector STOPPING] 
metrics.DefaultMetricsCollector: Stopped metrics reporter
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Took 170.7 ms to stop
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
RPC server for TTransactionServer stopped.
2016-08-31 01:08:34,776 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-08-31 01:08:34,777 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
2016-08-31 01:08:34,778 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] 
zookeeper.ClientCnxn: Session establishment complete on server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 
0x156de22abec0004, negotiated timeout = 4
2016-08-31 01:08:34,783 INFO  [leader-election-tx.service-leader] 
zookeeper.LeaderElection: Connected to ZK, running election: 
m9edd51-zookeeper.m9edd51 for /tx.service/leader
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Starting RPC server for TTransactionServer
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Running RPC server for TTransactionServer
2016-08-31 01:08:34,816 INFO  [ThriftRPCServer] 
server.TThreadedSelectorServerWithFix: Starting 
TThreadedSelectorServerWithFix
2016-08-31 01:08:34,822 INFO  [leader-election-tx.service-leader] 
distributed.TransactionService: Transaction Thrift Service started 
successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
2016-08-31 01:10:42,830 ERROR [TTransactionServer-rpc-0] 
thrift.ProcessFunction: Internal error processing startShort

java.lang.IllegalStateException: Transaction Manager is not running.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.tephra.TransactionManager.ensureAvailable(TransactionManager.java:709)
at 
org.apache.tephra.TransactionManager.startTx(TransactionManager.java:768)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:728)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:716)
at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.startShort(TransactionServiceThriftHandler.java:71)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tephra.rpc.ThriftRPCServer$1.invoke(ThriftRPCServer.java:261)

at com.sun.proxy.$Proxy17.startShort(Unknown Source)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:974)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:959)
at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java

Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-30 Thread F21
I have HBase 1.2.2 and Phoenix 4.8.0 running on my HBase master running 
on alpine linux with OpenJDK JRE 8.


This is my hbase-site.xml:




  
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>m9edd51-zookeeper.m9edd51</value>
  </property>
  <property>
    <name>data.tx.snapshot.dir</name>
    <value>/tmp/tephra/snapshots</value>
  </property>
  <property>
    <name>data.tx.timeout</name>
    <value>60</value>
  </property>
  <property>
    <name>phoenix.transactions.enabled</name>
    <value>true</value>
  </property>
</configuration>


I am able to start the master correctly. I am also able to create a 
non-transactional table.


However, if I create a transactional table, I get this error: ERROR 
[TTransactionServer-rpc-0] thrift.ProcessFunction: Internal error 
processing startShort


This is what I see in the logs:

2016-08-31 01:08:33,560 WARN 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Session 0x156de22abec0004 for server null, unexpected error, closing 
socket connection and attempting reconnect

java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-08-31 01:08:33,616 INFO  [DefaultMetricsCollector STOPPING] 
metrics.DefaultMetricsCollector: Stopped metrics reporter
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] 
tephra.TransactionManager: Took 170.7 ms to stop
2016-08-31 01:08:33,623 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: RPC 
server for TTransactionServer stopped.
2016-08-31 01:08:34,776 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Opening socket connection to server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-08-31 01:08:34,777 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Socket connection established to 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, initiating session
2016-08-31 01:08:34,778 INFO 
[main-SendThread(m9edd51-zookeeper.m9edd51:2181)] zookeeper.ClientCnxn: 
Session establishment complete on server 
m9edd51-zookeeper.m9edd51/172.18.0.2:2181, sessionid = 
0x156de22abec0004, negotiated timeout = 4
2016-08-31 01:08:34,783 INFO  [leader-election-tx.service-leader] 
zookeeper.LeaderElection: Connected to ZK, running election: 
m9edd51-zookeeper.m9edd51 for /tx.service/leader
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Starting RPC server for TTransactionServer
2016-08-31 01:08:34,815 INFO  [ThriftRPCServer] rpc.ThriftRPCServer: 
Running RPC server for TTransactionServer
2016-08-31 01:08:34,816 INFO  [ThriftRPCServer] 
server.TThreadedSelectorServerWithFix: Starting 
TThreadedSelectorServerWithFix
2016-08-31 01:08:34,822 INFO  [leader-election-tx.service-leader] 
distributed.TransactionService: Transaction Thrift Service started 
successfully on m9edd51-hmaster1.m9edd51/172.18.0.12:15165
2016-08-31 01:10:42,830 ERROR [TTransactionServer-rpc-0] 
thrift.ProcessFunction: Internal error processing startShort

java.lang.IllegalStateException: Transaction Manager is not running.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.tephra.TransactionManager.ensureAvailable(TransactionManager.java:709)
at 
org.apache.tephra.TransactionManager.startTx(TransactionManager.java:768)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:728)
at 
org.apache.tephra.TransactionManager.startShort(TransactionManager.java:716)
at 
org.apache.tephra.distributed.TransactionServiceThriftHandler.startShort(TransactionServiceThriftHandler.java:71)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tephra.rpc.ThriftRPCServer$1.invoke(ThriftRPCServer.java:261)

at com.sun.proxy.$Proxy17.startShort(Unknown Source)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:974)
at 
org.apache.tephra.distributed.thrift.TTransactionServer$Processor$startShort.getResult(TTransactionServer.java:959)
at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)

at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)

at org.apache.thrift.server.Invocation.run(Invocation.java:18)
at 

Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread F21

chmod +x myfile.sh

Also, check that the line endings for the file are LF and not CRLF.
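A small sketch of stripping CRLF endings in place (the file name myfile.sh follows the example above; tools like dos2unix or sed would work just as well):

```python
# Sketch: normalize CRLF line endings in place so the shell can find the
# interpreter named on the shebang line. "myfile.sh" is the example script
# name used above.
def strip_crlf(path):
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))

# Usage: strip_crlf("myfile.sh"), then chmod +x myfile.sh as shown above.
```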

On 31/08/2016 9:37 PM, Cheyenne Forbes wrote:

how do I make start-hbase-phoenix.sh executable?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com 
<mailto:cheyenne.osanu.for...@gmail.com>

Mobile: 876-881-7889 
skype: cheyenne.forbes1


On Wed, Aug 31, 2016 at 5:39 AM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


Did you build the image yourself? If so, you need to make
start-hbase-phoenix.sh executable before building it.


On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:


" ': No such file or directory"








Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-31 Thread F21
Did you build the image yourself? If so, you need to make 
start-hbase-phoenix.sh executable before building it.


On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:


" ': No such file or directory"





Re: FW: Phoenix Query Server not returning any results

2016-09-12 Thread F21

Hey,

You mentioned that you sent a PrepareAndExecuteRequest. However, to do 
that, you would need to first:
1. Open a connection: 
https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest
2. Create a statement: 
https://calcite.apache.org/docs/avatica_json_reference.html#createstatementrequest
3. Call your PrepareAndExecute request: 
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest
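A minimal sketch of the three-step sequence as JSON payloads (request shapes follow the Avatica JSON reference linked above; in a real session the statement id must be taken from the createStatement response, here it is a placeholder):

```python
# Sketch: the open -> createStatement -> prepareAndExecute sequence as
# Avatica JSON payloads. The table name QUESTTWEETS1 comes from this thread.
import json
import uuid

def request_sequence(sql, statement_id=1, max_rows=100):
    conn_id = str(uuid.uuid4())
    return [
        json.dumps({"request": "openConnection", "connectionId": conn_id}),
        json.dumps({"request": "createStatement", "connectionId": conn_id}),
        json.dumps({
            "request": "prepareAndExecute",
            "connectionId": conn_id,
            # In practice, use the id returned by the createStatement response.
            "statementId": statement_id,
            "sql": sql,
            "maxRowCount": max_rows,
        }),
    ]

for payload in request_sequence("SELECT * FROM QUESTTWEETS1"):
    print(payload)  # each payload is POSTed to the query server, in order
```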


The phoenix query service uses Avatica (part of Apache Calcite), so you 
can find all the documentation for talking with the query server here: 
https://calcite.apache.org/docs/avatica_json_reference.html


I do recommend using protobufs over JSON as protobufs are now the 
default serialization method.


In terms of making your clients talk to the query server directly, I 
don't think that would be the best idea except in a few cases. While the 
query server supports authentication and authorization, I don't think 
they are granular enough for most apps. Usually, you would put some 
business/domain layer in front and use Phoenix as the storage/data-access 
layer. However, it depends on your use-case :)


Cheers,
Francis

On 12/09/2016 9:02 PM, Puneeth Prasad wrote:


Hello,

We have just started using Apache Phoenix on HBase. We have a setup 
where Phoenix is working (We are able to CRUD in tables cleanly). Now, 
we want an application running outside the network (say a system 
hosting a mobile app) to be able to query Phoenix table. For that, one 
of the options we are trying is using Phoenix Query Server (PQS). I 
have ensured that the port 8765 is accessible from outside network and 
so, when we use below CURL command, we expect the desired result:


[root@externalsystem ~]# curl -XPOST -H 
'request:{"request":"prepareAndExecute","connectionId":"00---","statementId":12345,"sql": 
"SELECT * FROM 
QUESTTWEETS1","maxRowCount":1}' http://here.comes.external.ip:8765/


But the response which we get is:

{"response":"executeResults","missingStatement":true,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"viper.quest.com:8765 
"},"results":null}


We are using HDP 2.3.4.7-4 and aligned versions of HBase and PQS.

Very clearly, I am passing the SQL as one of the keys in the request. 
Can somebody please help me understand what I am doing wrong here? 
Additionally, since the goal is to provide a way to access Phoenix 
tables at high concurrency (which mobile apps can demand), is PQS a 
decent solution, or are there better options for accessing Phoenix 
tables? Since I am a newbie with HBase and Phoenix, please let me know 
if any other details are required.


Best Regards

Puneeth





Which statements are supported when using transactions?

2016-10-06 Thread F21

I just ran into the following scenario with Phoenix 4.8.1 and HBase 1.2.3.

1. Create a transactional table: CREATE TABLE schemas(version varchar 
not null primary key) TRANSACTIONAL=true


2. Confirm it exists/is created: SELECT * FROM schemas

3. Begin transaction.

4. Insert into schemas: UPSERT INTO schemas (version) VALUES 
('some-release')


5. Also create a table: CREATE TABLE test_table (id integer not null 
primary key)


6. Commit the transaction.

Once the transaction has been committed, I can see that test_table is 
created. However, the schemas table is missing the 'some-release' entry.



Which statements do not support transactions? In MySQL, statements like 
CREATE TABLE are non-transactional and commit implicitly. What is the 
case with Phoenix? It seems a bit odd that the table is created, but the 
UPSERT has no effect, even after committing the transaction.


Cheers,

Francis



Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-23 Thread F21
You don't have to download the docker image. Look at the Dockerfile file 
to see how to install phoenix with hbase in standalone mode. Then look 
at start-hbase-phoenix.sh to see how to set up the configuration file 
and start the services.


On 24/08/2016 9:03 AM, Cheyenne Forbes wrote:
I already have Hbase and Phoenix but I dont want to set up Hadoop, how 
can I do it with what I already have? (without downloading the docker 
file)


Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com 
<mailto:cheyenne.osanu.for...@gmail.com>

Mobile: 876-881-7889 
skype: cheyenne.forbes1



On Tue, Aug 23, 2016 at 5:49 PM, F21 <f21.gro...@gmail.com 
<mailto:f21.gro...@gmail.com>> wrote:


It's possible to run phoenix without hadoop using hbase in
standalone mode. However, you will not be able to do bulk loads,
etc. The safety of your data is also not guaranteed without HDFS.

For reference, I have a docker image running hbase standalone with
phoenix for testing purposes:
https://github.com/Boostport/hbase-phoenix-all-in-one
<https://github.com/Boostport/hbase-phoenix-all-in-one>


On 24/08/2016 8:23 AM, Cheyenne Forbes wrote:

what settings should be changed if I can?

Hbase 1.1.2
Phoenix 4.4.0
Ubuntu 14

*I try in this order:* "/usr/lib/hbase-1.1.2/bin/start-hbase.sh"
*then* "/usr/lib/phoenix-4.4.0/bin/queryserver.py start"
*then* "/usr/lib/phoenix-4.4.0/bin/sqlline-thin.py
http://localhost:8765 <http://localhost:8765/>" *but it gives
this error:*

java.lang.RuntimeException: java.net.ConnectException: Connection refused
  at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:59)
  at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:235)
  at org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:97)
  at org.apache.calcite.avatica.AvaticaConnection.getAutoCommit(AvaticaConnection.java:135)
  at sqlline.SqlLine.autocommitStatus(SqlLine.java:951)
  at sqlline.DatabaseConnection.connect(DatabaseConnection.java:178)
  at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
  at sqlline.Commands.connect(Commands.java:1064)
  at sqlline.Commands.connect(Commands.java:996)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
  at sqlline.SqlLine.dispatch(SqlLine.java:804)
  at sqlline.SqlLine.initArgs(SqlLine.java:588)
  at sqlline.SqlLine.begin(SqlLine.java:656)
  at sqlline.SqlLine.start(SqlLine.java:398)
  at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.net.ConnectException: Connection refused
  at java.net.PlainSocketImpl.socketConnect(Native Method)
  at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
  at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  at java.net.Socket.connect(Socket.java:579)
  at java.net.Socket.connect(Socket.java:528)
  at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
  at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
  at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
  at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
  at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1302)
  at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
  at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:43)
  ... 18 more

Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-23 Thread F21

Try running it with "bin/start-phoenix.sh &" to force it to the background.

On 24/08/2016 10:21 AM, Cheyenne Forbes wrote:
Okay, thank you, but it doesn't seem to run in the background with 
"bin/start-phoenix.sh"



Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-23 Thread F21
Tephra is used for transactions in 4.7.0 and onwards. In your case, 
ignore Tephra and the transaction-related settings in the configuration.
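
For readers on 4.7.0 or later: transactions are opt-in, so Tephra only matters if they are switched on. A sketch of the client-side hbase-site.xml property involved (per the Phoenix transactions documentation):

```xml
<!-- hbase-site.xml (client side): Phoenix transactions are opt-in.
     When this is false (the default), Tephra does not need to run. -->
<property>
  <name>phoenix.transactions.enabled</name>
  <value>false</value>
</property>
```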


On 24/08/2016 9:55 AM, Cheyenne Forbes wrote:

I don't see "phoenix/bin/tephra" in Phoenix version 4.4.0

Re: Can phoenix run without hadoop? (hbase standalone)

2016-08-23 Thread F21
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1302)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:43)

... 18 more
Building list of tables and columns for tab-completion (set 
fastconnect to true to skip)...

java.lang.RuntimeException: java.net.ConnectException: Connection refused
at 
org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:59)
at 
org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:235)
at 
org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:97)
at 
org.apache.calcite.avatica.AvaticaConnection.getCatalog(AvaticaConnection.java:182)

at sqlline.SqlLine.getColumns(SqlLine.java:1098)
at sqlline.SqlLine.getColumnNames(SqlLine.java:1122)
at sqlline.SqlCompleter.<init>(SqlCompleter.java:81)
at 
sqlline.DatabaseConnection.setCompletions(DatabaseConnection.java:84)

at sqlline.SqlLine.setCompletions(SqlLine.java:1730)
at sqlline.Commands.connect(Commands.java:1066)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)

at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)

at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1302)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at 
org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:43)

... 20 more



What are some use-cases for JOINs?

2016-10-20 Thread F21

Hey all,

Normally, rather than de-normalizing my data, I prefer to have the data 
duplicated in 2 tables. With transactions, it is quite simple to ensure 
atomic updates to those 2 tables (especially for read-heavy apps). This 
also makes things easier to query and avoids the memory limits of hash 
joins.


Having said that, what are some use cases where JOINs should be used?

Cheers,

Francis



Re: What are some use-cases for JOINs?

2016-10-20 Thread F21

P.S. I meant to say normalizing rather than de-normalizing.






Go Avatica/Phoenix driver updated to support new Go 1.8 features

2017-03-08 Thread F21

Hi all,

I am cross posting this from the Calcite mailing list, since the phoenix 
query server uses Avatica from the Calcite project.


Go 1.8 was released recently and the database/sql package saw a lot of 
new features. I just tagged the v1.3.0 release for the Go Avatica 
driver[0] which ships all of these new features.


The full list of changes in the database/sql package is available here: 
https://docs.google.com/document/d/1F778e7ZSNiSmbju3jsEWzShcb8lIO4kDyfKDNm4PNd8/edit


Backwards compatibility:

- The new interfaces/methods are all additive. The implementation is 
also backwards compatible, so v1.3.0 will work with Go 1.7.x and below.


Highlights:

- Methods now support using context to enable cancellation and timeouts 
so that queries can be cancelled on the client side. Note: Since there 
is no mechanism to cancel queries on the server, once a query is sent to 
the server, users should assume that it will be executed.


- The Ping method will now connect to the server and execute `SELECT 1` 
to ensure that the server is ok.


- More column type information: It is now possible to get the column 
name, type, length, precision + scale, nullability and the Go scan type 
for a column in a result set.


- Support for multiple result sets. Avatica had support for multiple 
result sets for a while and this mapped really well to the multiple 
result sets support introduced in Go 1.8.


Unimplemented features:

- Since Calcite/Avatica does not support named bind parameters in 
prepared statements, the driver will throw an error if you try to use them.


If you have any questions or comments, please let me know!

Cheers,

Francis

[0] https://github.com/Boostport/avatica



Phoenix vs CockroachDB

2017-07-13 Thread F21
I recently came across CockroachDB[0] which is a distributed SQL 
database. Operationally, it is easy to run and adding a new node to the 
cluster is really simple as well. I believe it is targeted towards OLTP 
workloads.


Has anyone else had a look at CockroachDB? How does it compare with 
Phoenix + HBase?


Francis

[0] https://www.cockroachlabs.com/