Theoretically you can run the Phoenix 4.1 server jar on HDP 2.1.5, but this
combination isn't tested, so you should not try it in a production environment.
To work around the coprocessor issue, you can try renaming the table folder in
HDFS and then restarting the region servers so that the meta region gets
assigned. You can then disable the problematic table, rename the HDFS table
folder back, and alter the table to remove the coprocessor.
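The sequence above might look roughly like this (a sketch only; MYTABLE and the
/apps/hbase/data path are placeholders for your actual table name and HBase root
directory, and the exact HDFS layout depends on your HBase version):

    # move the table directory aside so region open no longer loads the coprocessor
    hdfs dfs -mv /apps/hbase/data/data/default/MYTABLE /tmp/MYTABLE.bak
    # restart the region servers; once they are up, in hbase shell:
    #   disable 'MYTABLE'
    # move the directory back into place
    hdfs dfs -mv /tmp/MYTABLE.bak /apps/hbase/data/data/default/MYTABLE
    # then alter the table in hbase shell to remove the coprocessor attribute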
Another thing: once you use a Phoenix 4.1 client to connect to a Phoenix
cluster, the cluster's system table schema is upgraded to the 4.1 version. In
4.1, it seems to me, there are only two new columns created in the catalog
table, so it should be all right to continue using the 4.0 Phoenix client.
Date: Fri, 6 Mar 2015 09:37:32 +0800
From: su...@certusnet.com.cn
To: user@phoenix.apache.org; jamestay...@apache.org
CC: d...@phoenix.apache.org
Subject: Re: Re: HBase Cluster Down: No jar path specified for
org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
Hi, anil
I remember that you can find the client jars in
$PHOENIX_HOME/phoenix-assembly/target, named something like
phoenix-4.0.0-incubating-client.jar. If you rebuild the Phoenix source
targeting your HBase version, that is the right place to look.
Or do you just want the original tar file for the incubating Phoenix? You can
find it here:
https://archive.apache.org/dist/incubator/phoenix/phoenix-4.0.0-incubating/src/
Thanks,
Sun.
CertusNet
From: anil gupta
Date: 2015-03-06 09:26
To: user@phoenix.apache.org; James Taylor
CC: dev
Subject: Re: HBase Cluster Down: No jar path specified for
org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
@James: Could you point me to a place where I can find the tar file of the
Phoenix-4.0.0-incubating release? All the links on this page are broken:
http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <anilgupt...@gmail.com> wrote:
I have tried to disable the table, but since none of the RS are coming up, I am
unable to do it. Am I missing something?
On the server side, we were using 4.0.0-incubating. It seems like my only
option is to upgrade the server to 4.1, at least to get the HBase cluster back
up. I just want my cluster to come up, and then I will disable the table that
has a Phoenix view.
What would be the possible side effects of using Phoenix 4.1 with HDP 2.1.5?
And if the problem is still not fixed after upgrading to Phoenix 4.1, what is
the next alternative?
On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
Hi Anil,
HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or trying
out a newer version? As James says, the upgrade must be servers first, then
client. Also, Phoenix versions tend to be picky about their underlying HBase
version.
You can also try altering the now-broken Phoenix tables via the HBase shell,
removing the Phoenix coprocessor. I've tried this in the past with other
coprocessor-loading woes and had mixed results. Try: disable table, alter
table, enable table. There are still sharp edges around coprocessor-based
deployment.
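In the HBase shell, that sequence might look like the following (a sketch;
'MYTABLE' and the coprocessor slot 'coprocessor$3' are placeholders — run
describe first to see which coprocessor attribute actually holds
LocalIndexSplitter on your table):

    disable 'MYTABLE'
    # show table attributes, including coprocessor entries such as coprocessor$3
    describe 'MYTABLE'
    # unset the offending coprocessor attribute by its slot name
    alter 'MYTABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$3'
    enable 'MYTABLE'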
Keep us posted, and sorry for the mess.
-n
[0]:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <anilgupt...@gmail.com> wrote:
Unfortunately, we ran out of luck on this one because we are not running the
latest version of HBase. This property was introduced recently:
https://issues.apache.org/jira/browse/HBASE-13044 :(
Thanks, Vladimir.
On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vladrodio...@gmail.com>
wrote:
Try the following. Update the hbase-site.xml config and set:
hbase.coprocessor.enabled=false
or:
hbase.coprocessor.user.enabled=false
Sync the config across the cluster, restart the cluster, then update your
table's settings in the hbase shell.
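As an hbase-site.xml fragment, the first of those settings would look like this
(a sketch; whether these properties are honored depends on your HBase version):

    <property>
      <name>hbase.coprocessor.enabled</name>
      <value>false</value>
    </property>

After syncing this to every node and restarting, region servers should skip
coprocessor loading, letting you alter the table before re-enabling it.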
-Vlad
On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <anilgupt...@gmail.com> wrote:
Hi All,
I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RS. I was running the
Phoenix 4.1 client because I could not find the tar file for
Phoenix-4.0.0-incubating.
I tried to create a view on an existing table, and then my entire cluster went
down (all the RS went down; the Master is still up).
This is the exception I am seeing:
2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
    at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
    at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
    at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
    at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
We tried to restart the cluster, and it died again. It seems it's stuck at this
point, looking for the LocalIndexSplitter class. How can I resolve this error?
We can't do anything in the cluster until we fix it.
I was thinking of disabling those tables, but none of the RS are coming up. Can
anyone suggest how I can bail out of this bad situation?
--
Thanks & Regards,
Anil Gupta