Hi Jaanai Zhang,
When you say migrate the data, do you mean somehow exporting the data from
the Phoenix tables (Phoenix 4.6) and bulk-inserting it into new Phoenix
tables (Phoenix 4.14)?
Do you have a data migration script or anything similar I could make use
of?
Thanks,
Tanvi
On Wed, Oct 17, 2018 at 5:41
Thanks, Jaanai.
At first we thought it was a data issue too, but when we restored the table
from a snapshot into a separate schema on the same cluster to triage, the
exception no longer happened... Does that give any further clue as to what
the issue might have been?
0: jdbc:phoenix:journalnode,test> SELECT A, B,
The methods that you are invoking assume that the Phoenix JDBC driver
(the Java class org.apache.phoenix.jdbc.PhoenixDriver) is in use. It's
not, so you get this error.
The Phoenix "thick" JDBC driver is what's running inside the Phoenix
Query Server, just not in your local JVM. As such, those driver-specific
methods are not reachable from your client.
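For reference, the two connection styles look like this (the host names, ports, and ZooKeeper quorum below are placeholders, not values from this thread). The thin-client URL routes everything through the Query Server over Avatica's HTTP protocol, which is why PhoenixDriver-specific APIs aren't available to the client:

```
# "thick" driver: PhoenixDriver runs in your own JVM and talks to
# ZooKeeper/HBase directly (quorum and znode are placeholders)
jdbc:phoenix:zk1,zk2,zk3:2181:/hbase

# "thin" driver: a lightweight client in your JVM that speaks the
# Avatica wire protocol to the Phoenix Query Server over HTTP
jdbc:phoenix:thin:url=http://queryserver-host:8765;serialization=PROTOBUF
```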
It seems that it is impossible to upgrade directly from Phoenix 4.6 to
Phoenix 4.14: the schema of the SYSTEM tables has changed, and some
features will be incompatible. Maybe you can migrate the data from
Phoenix 4.6 to Phoenix 4.14 instead; that solution can ensure that
everything will be right.
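One possible shape for such a migration, sketched with stock Phoenix tooling (sqlline's `!record`/`!outputformat` commands, `psql.py`, and the `CsvBulkLoadTool` MapReduce job). This is only a sketch: the table name, ZooKeeper hosts, file paths, and DDL file are placeholders, and large tables would need a distributed export rather than a single sqlline dump:

```
# on the old (4.6) cluster: dump a table to CSV via sqlline
# (MY_TABLE, oldzk, and the paths are placeholders)
sqlline.py oldzk:2181 <<'EOF'
!outputformat csv
!record /tmp/MY_TABLE.csv
SELECT * FROM MY_TABLE;
!record
EOF

# on the new (4.14) cluster: recreate the table from its DDL,
# then bulk-load the exported CSV from HDFS
psql.py newzk:2181 my_table_ddl.sql
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE --input hdfs:///data/MY_TABLE.csv
```

Note that the sqlline `!record` output may need light cleanup (it echoes the statements it runs) before it is fed to the bulk loader.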
@Shamvenk
Yes, I did check the STATS table from the hbase shell; it's not empty.
After dropping all SYSTEM tables and mapping the HBase tables to Phoenix
tables by executing all the DDLs, I am seeing a new issue.
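For context, the re-mapping DDL would have had roughly this shape (the table, column-family, and index names here are placeholders, not the actual schema from this thread). Running CREATE TABLE against an already-existing HBase table makes Phoenix map the table rather than create it:

```sql
-- CREATE TABLE over an existing HBase table maps it into Phoenix.
-- COLUMN_ENCODED_BYTES = 0 is needed on Phoenix 4.10+ when the
-- underlying data was written without column encoding (e.g. by 4.6).
CREATE TABLE EXISTING_TABLE (
    PK VARCHAR PRIMARY KEY,
    CF.A VARCHAR,
    CF.B VARCHAR
) COLUMN_ENCODED_BYTES = 0;

-- secondary indexes must be recreated as well
CREATE INDEX MY_IDX ON EXISTING_TABLE (CF.A);
```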
I have a table and an index on that table. The numbers of records in the
index table and the main table are