I found the reason: I had not set the HADOOP_CONF_DIR environment variable.
Once I set it, the problem was solved.
Thank you F21, thank you very much!
------ Original Message ------
From: "F21"
Date: 2016-09-08 (Thu) 3:33
To:
Glad you got it working! :)
Cheers,
Francis
On 8/09/2016 7:11 PM, zengbaitang wrote:
I found the reason: I had not set the HADOOP_CONF_DIR environment variable.
Once I set it, the problem was solved.
Thank you F21, thank you very much!
I added phoenix.queryserver.serialization to hbase-site.xml and started the
query server.
Here is the response of the command:
curl -XPOST 'http://tnode02:8765' -d '{"request":
"openConnection","connectionId": "00---"}'
It seems to be the same exception:
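For reference, the request body above can be sketched as a standalone shell
snippet. The host tnode02:8765 is taken from the thread; the connection id
here is a made-up placeholder, not the elided one above:

```shell
# Build the Avatica "openConnection" JSON request body.
# "my-conn-1" is a hypothetical connection id used only for illustration.
payload='{"request": "openConnection", "connectionId": "my-conn-1"}'
echo "$payload"
# POST it to the query server (assumes JSON serialization is enabled):
# curl -XPOST 'http://tnode02:8765' -d "$payload"
```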
Your logs do not seem to show any errors.
You mentioned that you have two hbase-site.xml files. Are the Phoenix query
servers running on the same machine as the HBase servers? If not, the
hbase-site.xml for the Phoenix query servers also needs the ZooKeeper
configuration.
Did you also try to use
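If the query server does run on a separate machine, the ZooKeeper settings in
its hbase-site.xml might look roughly like this (the host names and port are
placeholders for illustration, not values from the thread):

```xml
<!-- Minimal ZooKeeper settings for a standalone Phoenix Query Server;
     host names and port below are hypothetical. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```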
Yes, the query server runs on one of the region servers.
When I exec curl 'http://tnode02:8765', the terminal returns:
Error 404 - Not Found
No context on this server matched or handled this request. Contexts known to
this server are:
From the response of your curl, it appears that the query server is
started correctly and running. The next bit to check is to see if it can
talk to the HBase servers properly.
Add phoenix.queryserver.serialization to the hbase-site.xml for the
query server and set the value to JSON.
Then
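The serialization switch described above would look roughly like this in the
query server's hbase-site.xml (a sketch; PROTOBUF is the default this setting
overrides):

```xml
<!-- Switch the query server's wire format to JSON, as suggested above. -->
<property>
  <name>phoenix.queryserver.serialization</name>
  <value>JSON</value>
</property>
```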
Is there any reason why it would be a bad idea to enable region replication on
the Phoenix metadata tables? Specifically, SYSTEM.CATALOG et al.
From everything I’m reading it seems like it would be a good idea. Those tables
are a single point of failure for Phoenix. If they aren’t up then no
I was going to say that
https://issues.apache.org/jira/browse/PHOENIX-3223 might be related,
but it looks like the HADOOP_CONF_DIR is already put on the classpath.
Glad to see you got this working :)
On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote:
> Glad you got it working! :)
I agree with Michael on enabling region replication for the SYSTEM.CATALOG
table. But when I tried enabling region replication, I was not able to bring
the HBase cluster. I am using HBase 1.2.1 and Phoenix 4.7 (on Amazon EMR
Platform)
Regards
Nithin
From: Michael McAllister
Yup, Francis got it right. There are POJOs in Avatica which Jackson
(un)marshals the JSON into and out of, and logic which constructs the POJOs
from Protobuf and vice versa.
In some hot-code paths, there are implementations in the server which
can use protobuf objects directly (to avoid extra
James
I’m not talking about replication between different clusters. Instead I’m
talking about region replication within the same cluster for High Availability
purposes. An overview is here:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hadoop-ha/content/ha-hbase-intro.html
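For context, region replication is enabled per table from the HBase shell; a
hedged sketch (the replica count of 2 is an example value, and altering a
Phoenix SYSTEM table this way is untested advice given the caveats raised
elsewhere in this thread):

```
# Run in the hbase shell; enables two region replicas on the table.
disable 'SYSTEM.CATALOG'
alter 'SYSTEM.CATALOG', {REGION_REPLICATION => 2}
enable 'SYSTEM.CATALOG'
```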
Is there an easy solution, or any solution at all, to clone a table/schema
in Phoenix?
Thanks in advance.
Ah, I see. I'm not sure if you'd ever see "partial" state since a Phoenix
table is represented by multiple rows in the SYSTEM.CATALOG table. Probably
not a good idea for SYSTEM.SEQUENCE table as you wouldn't want to see an
"old" row (which might make you get duplicate sequence values).
On Thu,
I meant the region servers were never active and did not show up in the UI.
From: Michael McAllister [mailto:mmcallis...@homeaway.com]
Sent: Thursday, September 8, 2016 11:46 AM
To: user@phoenix.apache.org
Subject: Re: Enabling region replication on Phoenix metadata tables
Nithin
>
when I tried
Take a look at this[1] thread for a discussion on replication of system
tables. You can replicate the SYSTEM.CATALOG table, but you have to be very
careful. Make sure to disable and discard replicated data for
SYSTEM.CATALOG while any Phoenix upgrade is in progress (i.e. first
connection after
Nithin
>
when I tried enabling region replication, I was not able to bring the HBase
cluster.
>
I’m not sure what you mean here. Specifically referring to “bring the HBase
cluster”.
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
It's not about data. I would like to clone just the table structure(s) under the
schema, either partially or as entire tables.
Kumar Palaniappan
> On Sep 8, 2016, at 5:48 PM, dalin.qin wrote:
>
> try this:
>
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> CREATE TABLE TABLE1 (ID
try this:
0: jdbc:phoenix:namenode:2181:/hbase-unsecure> CREATE TABLE TABLE1 (ID
BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR);
No rows affected (1.287 seconds)
0: jdbc:phoenix:namenode:2181:/hbase-unsecure> UPSERT INTO TABLE1 (ID,
COL1) VALUES (1, 'test_row_1');
1 row affected (0.105 seconds)
0:
I think you're best off running DDL with a new table name, but you could
probably upsert the values yourself into system.catalog. If you have a lot of
data to copy, you can use hbase snapshots and restore into the new table name.
This would also take care of creating the underlying hbase table,
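The snapshot approach mentioned above might look like this in the HBase shell
(the table and snapshot names are placeholders):

```
# Take a snapshot of the existing table and restore it under a new name
# (hbase shell; names are hypothetical).
snapshot 'TABLE1', 'table1_snapshot'
clone_snapshot 'table1_snapshot', 'TABLE1_COPY'
```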
Hi Kumar,
I believe right now there is no way to directly generate the DDL statement
for an existing table; it is better to write down your SQL immediately after
execution (in Oracle, dbms_metadata is so perfect; in Hive, show create
table also works).
However, you can query system.catalog for all the
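A query along those lines against SYSTEM.CATALOG might look like this (the
column choice is illustrative, SYSTEM.CATALOG has many more columns, and
'TABLE1' is a placeholder table name):

```sql
-- List the columns of a table by reading Phoenix's metadata table.
SELECT COLUMN_NAME, DATA_TYPE, ORDINAL_POSITION
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'TABLE1' AND COLUMN_NAME IS NOT NULL
ORDER BY ORDINAL_POSITION;
```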
Yes, we found a way to do it off of system.catalog.
In the meantime, we are trying to explore whether there are any off-the-shelf
options.
Thanks dalin.
Kumar Palaniappan
> On Sep 8, 2016, at 6:43 PM, dalin.qin wrote:
>
> Hi Kumar,
>
> I believe right now there is no way to