Hi, adjusting "kylin.metadata.hbase-rpc-timeout" does work. But I ran into another problem: when I tried to run a Kylin job, I hit the error below. I checked the consistency of kylin_metadata with "hbase hbck -details kylin_metadata", and the result was "0 inconsistencies".
org.apache.kylin.engine.mr.exception.HadoopShellException: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Mon Apr 22 14:30:23 GMT+08:00 2019, RpcRetryingCaller{globalStartTime=1555914623296, pause=100, retries=1}, org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region kylin_metadata,/dict/FACT/DIM_TB/8b5cdf3e-8aa3-5c70-44bd-fffdc6ed4d1a.dict,1555653527986.9fbc862f521b93968a3299c9d853992e. is not online on ip-109-105-1-504.compute.internal,16020,1555901919971
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3008)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1144)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2476)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2757)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Does anyone know how to solve this problem? Thanks a lot!
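In case it helps with diagnosis, this is roughly what I checked (a sketch only; the encoded region name 9fbc862f521b93968a3299c9d853992e is copied from the stack trace above, and I am not sure whether manually re-assigning the region from the hbase shell would be a safe fix):

    # table consistency check (this is what reported "0 inconsistencies")
    hbase hbck -details kylin_metadata

    # inspect where hbase:meta thinks the kylin_metadata regions live (HBase 1.x shell)
    hbase shell
    hbase> scan 'hbase:meta', {STARTROW => 'kylin_metadata', LIMIT => 50}

    # possible next step: ask the master to re-assign the region reported as "not online"?
    hbase> assign '9fbc862f521b93968a3299c9d853992e'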
--------- Original Message ---------
Sender : JiaTao Tao <[email protected]>
Date : 2019-04-19 10:54 (GMT+9)
Title : Re: KYLIN timeout problem
Hi
You can adjust "kylin.metadata.hbase-rpc-timeout" to a larger value. Then run the metadata cleanup and storage cleanup; that will reduce the amount of data in HBase.
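Roughly what I mean, as a sketch (the property value below is just an example in milliseconds, and the cleanup class name can differ between Kylin versions, so please check the docs for your release):

    # in $KYLIN_HOME/conf/kylin.properties, raise the HBase RPC timeout (milliseconds; example value)
    kylin.metadata.hbase-rpc-timeout=10000

    # metadata cleanup: dry run first, then delete for real
    $KYLIN_HOME/bin/metastore.sh clean
    $KYLIN_HOME/bin/metastore.sh clean --delete true

    # storage cleanup of unused HBase tables / intermediate files
    # (the class path may be different in older releases)
    $KYLIN_HOME/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true

Remember to restart Kylin after changing kylin.properties.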
Hi,
I hit the error below when I ran a cube build in Kylin. It happened at step "#4 Step Name: Build Dimension Dictionary".
Tue Apr 16 14:18:06 GMT+08:00 2019, RpcRetryingCaller{globalStartTime=1555395481041, pause=100, retries=1}, java.io.IOException: Call to [HBASE URL] failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=15589, waitTime=5001, operationTimeout=5000 expired.
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    ... 3 more
Caused by: java.io.IOException: Call to ip-10-10-110-102.cn-north-1.compute.internal/10.10.110.102:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=15589, waitTime=5001, operationTimeout=5000 expired.
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1274)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:35396)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:224)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
This cube has about 25 normal dimensions.
Then I created a new cube with only 3 normal dimensions and ran it; that job succeeded.
When I tried to do a metadata backup and metadata clean with the "metastore.sh" command, I also hit the same error: "org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=15589, waitTime=5001, operationTimeout=5000 expired".
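For reference, these are the invocations I used (standard metastore.sh usage as far as I know); both failed with the CallTimeoutException above:

    $KYLIN_HOME/bin/metastore.sh backup
    $KYLIN_HOME/bin/metastore.sh clean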
Does anyone know about the root cause of this problem? And how to fix it? Thanks a lot!