Re: Reply: Reply: Reply: table_snapshot file does not exist

2017-05-27 Thread Li Yang
Sounds good.  :-)

On Sat, May 27, 2017 at 3:03 PM, jianhui.yi <jianhui...@zhiyoubao.com>
wrote:

> Aha, a crude workaround:
>
> 1. backup metadata
>
> 2. drop all cubes and models
>
> 3. unload that table
>
> 4. load that table
>
> 5. restore metadata.
>
>
>
> :-)
>
>
>
> *From:* Li Yang [mailto:liy...@apache.org]
> *Sent:* May 27, 2017 14:50
> *To:* user@kylin.apache.org
> *Subject:* Re: Reply: Reply: table_snapshot file does not exist
>
>
>
> What has been done to fix this issue? Curious to know.
>
>
>
> On Sat, May 27, 2017 at 1:37 PM, jianhui.yi <jianhui...@zhiyoubao.com>
> wrote:
>
> Thanks, I fixed it.
>
>
>
> *From:* Li Yang [mailto:liy...@apache.org]
> *Sent:* May 27, 2017 10:29
> *To:* user@kylin.apache.org
> *Subject:* Re: Reply: table_snapshot file does not exist
>
>
>
> It seems your Kylin metadata is somewhat corrupted. The metadata contains a
> snapshot entry for table DW.DIM_PRODUCT, but the corresponding physical file
> does not exist on HDFS.
>
> You can manually fix the metadata, or, if rebuilding the data is easy, delete
> all metadata and start over.
>
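A quick way to confirm this diagnosis from the command line (a sketch, assuming the default kylin_metadata working directory; the path and snapshot UUID below are copied from the error message in this thread):

    # List the snapshot directory that the Kylin metadata entry points at
    hadoop fs -ls /kylin/kylin_metadata/resources/table_snapshot/DW.DIM_PRODUCT/

    # If 1394db19-c200-46f8-833c-d28878629246.snapshot is not in the listing,
    # the metadata entry is stale: fix the metadata or rebuild from scratch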
>
>
> On Fri, May 19, 2017 at 11:03 AM, jianhui.yi <jianhui...@zhiyoubao.com>
> wrote:
>
> It is a build error.
>
>
>
> *From:* Billy Liu [mailto:billy...@apache.org]
> *Sent:* May 19, 2017 11:00
> *To:* user <user@kylin.apache.org>
> *Subject:* Re: table_snapshot file does not exist
>
>
>
> Is it a build error or a query error? You mentioned two scenarios, but only
> one exception.
>
>
>
> 2017-05-18 14:25 GMT+08:00 jianhui.yi <jianhui...@zhiyoubao.com>:
>
> Hi all:
>
> When the cube build reaches step 4 (Build Dimension Dictionary), the
> following error occurs. How can I solve it?
>
> The error appears whenever I use dimensions from this table.
>
>
>
> java.io.FileNotFoundException: File does not exist: /kylin/kylin_metadata/resources/table_snapshot/DW.DIM_PRODUCT/1394db19-c200-46f8-833c-d28878629246.snapshot
>     at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
>     at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2007)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1977)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1890)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572)
>     at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)
>
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>     at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1281)
>     at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1266)
>     at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1254)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:305)
>     at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
>     at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
>     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:309)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
>     at org.apache.hadoop.fs.FileSystemLinkResol

Reply: Reply: Reply: table_snapshot file does not exist

2017-05-27 Thread jianhui.yi
Aha, a crude workaround (a command sketch for steps 1 and 5 follows below):

1. backup metadata

2. drop all cubes and models

3. unload that table

4. load that table

5. restore metadata.

 

:-)
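
For steps 1 and 5, a minimal sketch using the metastore.sh tool that ships with Kylin; the timestamped backup folder name below is illustrative, since metastore.sh generates its own name under $KYLIN_HOME/meta_backups:

    # Step 1: back up all Kylin metadata into a timestamped local folder
    $KYLIN_HOME/bin/metastore.sh backup

    # Steps 2-4 (drop cubes/models, unload and reload the table) are done in
    # the Kylin web UI

    # Step 5: restore the saved metadata from the backup folder
    $KYLIN_HOME/bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_2017_05_27_15_03_00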

 


Re: Reply: Reply: table_snapshot file does not exist

2017-05-27 Thread Li Yang
What has been done to fix this issue? Curious to know.


Reply: Reply: table_snapshot file does not exist

2017-05-26 Thread jianhui.yi
Thanks, I fixed it.

 
