>
/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_poc-kylin_metadata_kylin_2.3.1/resources/table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY/2d660a9f-186d-47d9-b043-1ded145433ba.snapshot


The error says that a file which should exist on HDFS does not. The file
is the snapshot of a lookup table, so it seems your Kylin metadata is
somewhat corrupted. There are ways to fix this, but none of them is cheap,
nor can any of them return your Kylin to full health...
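Before attempting any fix, it is worth confirming the file is really gone and taking a metadata backup. A minimal sketch, assuming the hdfs CLI is on the PATH and KYLIN_HOME points at your Kylin 2.3.1 install (metastore.sh backup is the standard Kylin 2.x backup command; the guards let the script run harmlessly where those tools are absent):

```shell
#!/bin/sh
# Snapshot path taken verbatim from the error message.
SNAPSHOT="/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_poc-kylin_metadata_kylin_2.3.1/resources/table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY/2d660a9f-186d-47d9-b043-1ded145433ba.snapshot"

# The snapshots of this lookup table live in the parent directory; an
# older, intact snapshot may still be there.
SNAPSHOT_DIR=$(dirname "$SNAPSHOT")

if command -v hdfs >/dev/null 2>&1; then
    # 1. Confirm the file is missing (the stack trace says it is).
    hdfs dfs -test -e "$SNAPSHOT" && echo "exists" || echo "missing"
    # 2. List sibling snapshots of the same lookup table.
    hdfs dfs -ls "$SNAPSHOT_DIR"
fi

# 3. Always back up the metadata store before touching anything.
if [ -x "${KYLIN_HOME:-}/bin/metastore.sh" ]; then
    "$KYLIN_HOME/bin/metastore.sh" backup
fi
```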

If you can afford data loss, rebuilding a new cube from scratch is the
simplest way out. You should also dig deeper in kylin.log to see how the
metadata became corrupted. I guess there is some HDFS error earlier in the
log; maybe HDFS was unstable at that moment.
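To dig through kylin.log, something like the following grep sketch can surface earlier HDFS trouble. The log location assumes a default Kylin layout under KYLIN_HOME, and the error patterns are common HDFS failure signatures, not an exhaustive list:

```shell
#!/bin/sh
# Default Kylin log location; adjust if your deployment differs.
LOG="${KYLIN_HOME:-/opt/kylin}/logs/kylin.log"

if [ -f "$LOG" ]; then
    # Typical signatures of HDFS instability or data loss.
    grep -n -E 'FileNotFoundException|LeaseExpiredException|SafeModeException|could only be replicated|Connection refused' \
        "$LOG" | head -50

    # Every mention of this lookup table's snapshots, to see when the
    # file was written and when reads started failing.
    grep -n 'table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY' \
        "$LOG" | head -20
fi
```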

On Wed, Aug 8, 2018 at 12:23 PM Kumar, Manoj H <[email protected]>
wrote:

> Can someone suggest on this? What's wrong here?
>
>
>
> java.io.IOException: Failed to read resource at
> /table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY/2d660a9f-186d-47d9-b043-1ded145433ba.snapshot
>
>         at
> org.apache.kylin.storage.hbase.HBaseResourceStore.getInputStream(HBaseResourceStore.java:256)
>
>         at
> org.apache.kylin.storage.hbase.HBaseResourceStore.getResourceImpl(HBaseResourceStore.java:277)
>
>         at
> org.apache.kylin.common.persistence.ResourceStore.getResource(ResourceStore.java:165)
>
>         at
> org.apache.kylin.dict.lookup.SnapshotManager.load(SnapshotManager.java:196)
>
>         at
> org.apache.kylin.dict.lookup.SnapshotManager.checkDupByInfo(SnapshotManager.java:161)
>
>         at
> org.apache.kylin.dict.lookup.SnapshotManager.buildSnapshot(SnapshotManager.java:107)
>
>         at
> org.apache.kylin.cube.CubeManager$DictionaryAssist.buildSnapshotTable(CubeManager.java:1055)
>
>         at
> org.apache.kylin.cube.CubeManager.buildSnapshotTable(CubeManager.java:971)
>
>         at
> org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:87)
>
>         at
> org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:49)
>
>         at
> org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:71)
>
>         at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:97)
>
>         at
> org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
>
>         at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
>
>         at
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:67)
>
>         at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
>
>         at
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:300)
>
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
>         at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.io.FileNotFoundException: File does not exist:
> /tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_poc-kylin_metadata_kylin_2.3.1/resources/table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY/2d660a9f-186d-47d9-b043-1ded145433ba.snapshot
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2007)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1977)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1890)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
>
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
>
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)
>
>
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>         at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>
>         at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>
>         at
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1281)
>
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1266)
>
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1254)
>
>         at
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:305)
>
>         at
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
>
>         at
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
>
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:309)
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
>
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:305)
>
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:781)
>
>         at
> org.apache.kylin.storage.hbase.HBaseResourceStore.getInputStream(HBaseResourceStore.java:254)
>
>         ... 19 more
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File
> does not exist:
> /tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_poc-kylin_metadata_kylin_2.3.1/resources/table_snapshot/DB_RFT_RCMO_RFDA.DRRBUSINESSHIERARCHY/2d660a9f-186d-47d9-b043-1ded145433ba.snapshot
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2007)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1977)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1890)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572)
>
>         at
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
>
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
>
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
>
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)
>
>
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1472)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1409)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>
>         at com.sun.proxy.$Proxy41.getBlockLocations(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
>
>         at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:606)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>
>         at com.sun.proxy.$Proxy42.getBlockLocations(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1279)
>
>         ... 31 more
>
>
>
> result code:2
>
>
>
>
>
> Regards,
>
> Manoj
>
>
>
>
