I have successfully deployed phoenix and the phoenix query server into a
toy HBase cluster.
I am currently running the HTTP query server on all regionservers.
However, I think it would be much better if I could run the HTTP query
servers on separate Docker containers or machines. This way, I can
Hey Rafa,
So in terms of the hbase-site.xml, I just need the entries for the
ZooKeeper quorum and the ZooKeeper znode for the cluster, right?
Cheers!
On 17/12/2015 9:48 PM, rafa wrote:
Hi F21,
You can install Query Server in any server that has network connection
with your
I haven't used this driver (I don't write any .NET code), but I used it as
a reference while building https://github.com/Boostport/avatica, in
particular for setting up the HTTP requests correctly.
Francis
On 28/06/2016 8:19 AM, Josh Elser wrote:
Hi,
I was just made aware of a neat little .NET
On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com> wrote:
I have been trying to get tephra working, but wasn't able to get
it starting successfully.
I
I think that might be from the tephra startup script.
The folder /opt/hbase/phoenix-assembly/ does not exist on my system.
On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*
On Wed, Mar 30, 2016 at 5:42 PM, F21
HBase classes
in hbase/lib.
- Check for exception starting tephra in
/tmp/tephra-*/tephra-service-*.log (assuming this is the log location
configured in your tephra-env.sh)
- mujtaba
On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com> wrote:
you think this might be a bug?
On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*
On Wed, Mar 30, 2016 at 5:42 PM, F21 <f21.gro...@gmail.com
<mailto:f21.gro...@gmail.com>> wrote:
Thanks f
Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest
Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then, use
that
wrapper library. If there are any books or references where I can
read more about Apache Phoenix, that would be very helpful.
Thanks
On 13.04.2016 13:29, F21 wrote:
Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html
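To make the fix concrete, here is a minimal sketch of the JSON flow described above. Field names follow the Avatica JSON reference linked above; the connection and statement ids are illustrative, not values from this thread.

```python
import json

# Sketch: createStatement must run first so the server hands back a
# statementId, which is then echoed in the prepareAndExecute request.
connection_id = "conn-1"  # client-chosen id, illustrative

create_statement = {
    "request": "createStatement",
    "connectionId": connection_id,
}

# Suppose the createStatement response carried statementId 1:
statement_id = 1

prepare_and_execute = {
    "request": "prepareAndExecute",
    "connectionId": connection_id,
    "statementId": statement_id,  # the field missing from the failing request
    "sql": "SELECT * FROM my_table3",
    "maxRowCount": -1,
}

body = json.dumps(prepare_and_execute)
```

Each payload is POSTed to the query server as the HTTP request body.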
I am interested in building a Go client to query the phoenix query
server using protocol buffers.
The query server is running on http://localhost:8765, so I tried POSTing
to localhost:8765 with the marshalled protocol buffer as the body.
Unfortunately, the server responds with:
have a work around. Would you mind filing a Calcite bug for
the Avatica component after you finish your testing?
Thanks,
James
On Sat, Apr 2, 2016 at 4:10 AM, F21 <f21.gro...@gmail.com> wrote:
I was able to successfully commit a transact
I am using HBase 1.1.3 with Phoenix 4.8.0-SNAPSHOT. To talk to phoenix,
I am using the phoenix query server with serialization set to JSON.
First, I create a non-transactional table:
CREATE TABLE my_table3 (k BIGINT PRIMARY KEY, v VARCHAR)
TRANSACTIONAL=false;
I then send the following
Send your unsubscribe request to user-unsubscr...@phoenix.apache.org to
unsubscribe. :)
On 29/03/2016 4:54 PM, Dor Ben Dov wrote:
working in our environment. To
verify, can you please try this? Copy only the tephra and tephra-env.sh
files supplied with Phoenix into a new directory with the HBASE_HOME env
variable set, and then run tephra.
Thanks,
Mujtaba
On Wed, Mar 30, 2016 at 9:59 PM, F21 <f21.gro...@gmail.com> wrote:
problems doing commits when using the thin client
and Phoenix 4.6.0.
Hope this helps,
Steve
On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com> wrote:
As I mentioned about a week ago, I am working on a golang client using
protobuf serialization with the phoenix query server. I have
successfully dealt with the serialization of requests and responses.
However, I am trying to commit a transaction and it just doesn't seem to
commit.
Here's what
Hi all,
I have just open sourced a golang driver for Phoenix and Avatica.
The code is licensed using the Apache 2 License and is available here:
https://github.com/Boostport/avatica
Contributions are very welcome :)
Cheers,
Francis
],
"sql":null,
"parameters":[
],
"cursorFactory":{
"style":"LIST",
"clazz":null,
"fieldNames":null
},
"st
On 13.04.2016 19:27, Josh Elser wrote:
For reference materials: definitely check out
https://calcite.apache.org/avatica/
While JSON is easy to get started with, there are zero guarantees on
compatibility between versions. If you use protobuf, we should be
able to hide all schema drift from yo
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "SYSTEM.TABLE",
"catalogName": "",
"type": {
"type": "scalar",
"scale": 0,
"tableName": "US_POPULATION",
"catalogName": "",
"type": {
"type": "scalar",
"id": 12,
"name": "VARCHAR",
TransactionServiceMain
Cheers,
Francis
On 1/09/2016 4:01 AM, Thomas D'Silva wrote:
Can you check the Transaction Manager logs and see if there are any
errors? Also, can you run jps and confirm the Transaction Manager
is running?
On Wed, Aug 31, 2016 at 2:12 AM, F21 <f21.gro...@gmail.com> wrote:
Glad you got it working! :)
Cheers,
Francis
On 8/09/2016 7:11 PM, zengbaitang wrote:
I found the reason: I had not set the HADOOP_CONF_DIR env variable.
Once I set it, the problem was solved.
Thank you F21, thank you very much
curl or wget to get
http://your-phoenix-query-server:8765 to see if there's a response?
Cheers,
Francis
On 8/09/2016 3:54 PM, zengbaitang wrote:
Hi F21, I am sure hbase-site.xml was configured properly.
Here is my *hbase-site.xml (hbase side)*:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://stage-cluster</value>
</property>
by Jetty:// Java Web
Server
-- ----------
*From:* "F21" <f21.gro...@gmail.com>
*Date:* 8 September 2016 (Thu) 2:01
*To:* "user" <user@phoenix.apache.org>
*Subject:* Re: Reply: Can query server run with hadoop ha mode?
Your logs do not seem to s
I am not sure what you mean here. The phoenix query server (which is
based on avatica, which is a subproject in the Apache Calcite project)
accepts both Protobufs and JSON depending on the value of
"phoenix.queryserver.serialization".
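For reference, this switch is a property in the query server's hbase-site.xml; a minimal sketch (JSON shown here, matching the discussion above):

```xml
<property>
  <name>phoenix.queryserver.serialization</name>
  <!-- JSON or PROTOBUF (the default) -->
  <value>JSON</value>
</property>
```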
The server implements readers that will convert the
I have a test cluster running HDFS in HA mode with HBase + Phoenix on
docker running successfully.
Can you check if you have a properly configured hbase-site.xml that is
available to your phoenix query server? Make sure hbase.zookeeper.quorum
and zookeeper.znode.parent are present. If
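A minimal client-side hbase-site.xml for the query server might look like the following sketch (host names are placeholders, not values from this thread):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>
```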
/../conf/:/opt/hbase/phoenix-c
542 root 0:00 /bin/bash
9035 root 0:00 sleep 1
9036 root 0:00 ps
bash-4.3# wget localhost:15165
Connecting to localhost:15165 (127.0.0.1:15165)
wget: error getting response: Connection reset by peer
On 31/08/2016 3:25 PM, F21 wrote:
This only seems
I have HBase 1.2.2 and Phoenix 4.8.0 running on my HBase master running
on Alpine Linux with OpenJDK JRE 8.
This is my hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
, Co.
Chairman
Avapno Assets, LLC
Bethel Town P.O
Westmoreland
Jamaica
Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1
On Wed, Aug 31, 2016 at 5:39 AM, F21 <f21.gro...@gmail.com> wrote:
Did you build the image yourself? If so, you need to make
start-hbase-phoenix.sh executable before building it.
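If the image is built from a Dockerfile, the fix might look like this sketch (the destination path is illustrative; only the script name comes from this thread):

```dockerfile
# Copy the entrypoint script and mark it executable inside the image
COPY start-hbase-phoenix.sh /opt/hbase/bin/start-hbase-phoenix.sh
RUN chmod +x /opt/hbase/bin/start-hbase-phoenix.sh
```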
On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:
" ': No such file or directory"
Hey,
You mentioned that you sent a PrepareAndExecuteRequest. However, to do
that, you would need to first:
1. Open a connection:
https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest
2. Create a statement:
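The two prerequisite requests can be sketched as JSON payloads (field names per the Avatica JSON reference; the connection id is client-chosen and purely illustrative):

```python
import json

# Sketch of the two requests that must precede prepareAndExecute.
# "conn-1" is an illustrative client-chosen connection id.
open_connection = {
    "request": "openConnection",
    "connectionId": "conn-1",
}

create_statement = {
    "request": "createStatement",
    "connectionId": "conn-1",
}

# Each payload is POSTed to the query server as the HTTP request body:
payloads = [json.dumps(open_connection), json.dumps(create_statement)]
```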
I just ran into the following scenario with Phoenix 4.8.1 and HBase 1.2.3.
1. Create a transactional table: CREATE TABLE schemas(version varchar
not null primary key) TRANSACTIONAL=true
2. Confirm it exists/is created: SELECT * FROM schemas
3. Begin transaction.
4. Insert into schemas:
On Tue, Aug 23, 2016 at 5:49 PM, F21 <f21.gro...@gmail.com> wrote:
It's possible to run phoen
On Tue, Aug 23, 2016 at 6:56 PM, F21 <f21.
On Tue, Aug 23, 2016 at 6
On Tue, Aug 23, 2016 at 7:23 PM, F21 <f21.gro...@gmail.com> wrote:
Try running it with
Hey all,
Normally, rather than de-normalizing my data, I prefer to have the data
duplicated in 2 tables. With transactions, it is quite simple to ensure
atomic updates to those 2 tables (especially for read-heavy apps). This
also makes things easier to query and avoids the memory limits of
P.S. I meant to say normalizing rather than de-normalizing.
On 21/10/2016 10:36 AM, F21 wrote:
Hey all,
Normally, rather than de-normalizing my data, I prefer to have the
data duplicated in 2 tables. With transactions, it is quite simple to
ensure atomic updates to those 2 tables (especially
Hi all,
I am cross posting this from the Calcite mailing list, since the phoenix
query server uses Avatica from the Calcite project.
Go 1.8 was released recently and the database/sql package saw a lot of
new features. I just tagged the v1.3.0 release for the Go Avatica
driver[0] which ships
I recently came across CockroachDB[0] which is a distributed SQL
database. Operationally, it is easy to run and adding a new node to the
cluster is really simple as well. I believe it is targeted towards OLTP
workloads.
Has anyone else had a look at CockroachDB? How does it compare with