BerylLin created PHOENIX-2130:
---------------------------------
Summary: Can't connect to HBase cluster
Key: PHOENIX-2130
URL: https://issues.apache.org/jira/browse/PHOENIX-2130
Project: Phoenix
Issue Type: Bug
Environment: Ubuntu 14.0
Reporter: BerylLin
I have a Hadoop cluster with 6 nodes; the Hadoop version is 2.2.0.
The ZooKeeper ensemble is installed on datanode1, datanode2, datanode3, datanode4, and datanode5.
The HBase cluster (version 0.98.13) is installed on the same nodes.
HBase can be started and used successfully.
The Phoenix version is 4.3.0 (4.4.0 has also been tried).
When I run "sqlline.py datanode1:2181", I get the error below:
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:datanode1:2181 none none
org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:datanode1:2181
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/07/18 20:55:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:870)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1194)
    at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:111)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1682)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:592)
    at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:177)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:280)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:272)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:270)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1052)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$11.call(ConnectionQueryServicesImpl.java:1841)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$11.call(ConnectionQueryServicesImpl.java:1810)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1810)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:126)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
    at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
    at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
    at sqlline.Commands.connect(Commands.java:1064)
    at sqlline.Commands.connect(Commands.java:996)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
    at sqlline.SqlLine.dispatch(SqlLine.java:804)
    at sqlline.SqlLine.initArgs(SqlLine.java:588)
    at sqlline.SqlLine.begin(SqlLine.java:656)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:213)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:227)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3398)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:631)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:522)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:831)
    ... 31 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:43773)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.createTable(HConnectionManager.java:2007)
    at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:635)
    at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:631)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
    ... 36 more
sqlline version 1.1.8
It seems that I should set the property "hbase.table.sanity.checks" to false, but I have not found where to set it. My current guess at where it would go is sketched below; please correct me if that is wrong.
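For reference, this is the change I assume the error message means by "at conf" (a minimal sketch only; I am assuming it belongs in hbase-site.xml on the HBase master, followed by a restart, and I have not verified this):

    <!-- hbase-site.xml (assumed location) on the HBase master: disable table sanity checks -->
    <property>
      <name>hbase.table.sanity.checks</name>
      <value>false</value>
    </property>

The message also mentions the table descriptor as an alternative place, but I am not sure how that would apply here since the table in question is Phoenix's own system table.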
Has anyone met the same error? Could you please give me some advice?
Thank you very much!