Great, it worked out finally. :-)

If you spot any failures in the log that may be related to the missing
HFiles, please post them here; that may help prevent similar errors.

Thanks
Yang

On Tue, Jun 9, 2015 at 3:04 AM, Diego Pinheiro <[email protected]>
wrote:

> I'm not sure why, but the HFile was not being generated, and since the
> file did not exist, that error was thrown. I found this out by debugging
> BulkLoadJob.java.
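>
> In case it helps anyone else, the call that blows up is the chmod of the
> HFile output directory. Here is a simplified sketch of what I saw while
> stepping through (from memory, not the exact Kylin source; the path comes
> from the step's -input argument):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>     import org.apache.hadoop.fs.permission.FsPermission;
>
>     public class BulkLoadSketch {
>         public static void main(String[] args) throws Exception {
>             // args[0]: the -input HFile directory of the bulk load step
>             FileSystem fs = FileSystem.get(new Configuration());
>             Path hfileDir = new Path(args[0]);
>             // BulkLoadJob.java:75 in the trace: chmod the generated HFiles
>             // so the hbase user can take them over. setPermission() throws
>             // AccessControlException when the path was never created or
>             // the current user does not own it.
>             fs.setPermission(hfileDir, new FsPermission((short) 0777));
>         }
>     }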
>
> I have changed my 'Advanced Settings' for the aggregation groups and row
> keys. Then my cube was built successfully.
>
> I think my previous configuration was not giving Kylin enough
> information, but I'm not sure.
>
> On Thu, Jun 4, 2015 at 2:41 AM, Li Yang <[email protected]> wrote:
> > at org.apache.kylin.job.hadoop.hbase.BulkLoadJob.run(BulkLoadJob.java:75)
> >
> > Here Kylin is setting the permissions of the generated HFiles so HBase
> > can take them over. My feeling is that something is wrong with the HFile.
> > Click the key icon under the step in the GUI, and you shall see a popup
> > like
> >
> >  "-input
> >
> /apps/hdmi-prod/b_kylin/qa/kylin-938b48a3-a9e8-472e-b9e2-f50bc2a16778/xxxx/hfile/
> >  -htablename KYLIN_LLCVWUB5Y4 -cubename xxxx"
> >
> > The -input value is the HFile directory that the permission error is
> > about. Check it on HDFS.
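> >
> > If you want to check it programmatically, something like this standalone
> > snippet (a hypothetical helper, not part of Kylin) prints whether the
> > directory exists and who owns each file:
> >
> >     import org.apache.hadoop.conf.Configuration;
> >     import org.apache.hadoop.fs.FileStatus;
> >     import org.apache.hadoop.fs.FileSystem;
> >     import org.apache.hadoop.fs.Path;
> >
> >     public class CheckHFileDir {
> >         public static void main(String[] args) throws Exception {
> >             // args[0]: the -input path shown in the popup above
> >             FileSystem fs = FileSystem.get(new Configuration());
> >             Path dir = new Path(args[0]);
> >             if (!fs.exists(dir)) {
> >                 System.out.println("HFile dir is missing: " + dir);
> >                 return;
> >             }
> >             for (FileStatus st : fs.listStatus(dir)) {
> >                 // the job user must own the files to chmod them
> >                 System.out.println(st.getPermission() + " " + st.getOwner()
> >                         + " " + st.getGroup() + " " + st.getPath());
> >             }
> >         }
> >     }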
> >
> >
> > Cheers
> > Yang
> >
> > On Tue, Jun 2, 2015 at 10:23 AM, Diego Pinheiro <[email protected]> wrote:
> >
> >> Hi there,
> >>
> >> I started playing around with Kylin by building a simple example. I
> >> created a cube with 5 lookup tables, 5 dimensions and 1 measure. Then I
> >> started the cube build. At the "HFile to HBase" phase, I got a
> >> permission error.
> >>
> >> org.apache.hadoop.security.AccessControlException: Permission denied.
> >> user=cloudera is not the owner of inode=null
> >> at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkOwner(DefaultAuthorizationProvider.java:169)
> >> at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:157)
> >> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6454)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1757)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1737)
> >> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:618)
> >> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:172)
> >> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:444)
> >> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:415)
> >> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
> >> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >> at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> >> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> >> at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2321)
> >> at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1296)
> >> at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1292)
> >> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >> at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1292)
> >> at org.apache.kylin.job.hadoop.hbase.BulkLoadJob.run(BulkLoadJob.java:75)
> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> >> at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
> >> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >> at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
> >> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >> at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:132)
> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> at java.lang.Thread.run(Thread.java:745)
> >> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied. user=cloudera is not the owner of inode=null
> >> at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkOwner(DefaultAuthorizationProvider.java:169)
> >> at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:157)
> >> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:6454)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1757)
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1737)
> >> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:618)
> >> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setPermission(AuthorizationProviderProxyClientProtocol.java:172)
> >> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:444)
> >> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:415)
> >> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1468)
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> >> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> >> at com.sun.proxy.$Proxy26.setPermission(Unknown Source)
> >> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:345)
> >> at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
> >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> at java.lang.reflect.Method.invoke(Method.java:606)
> >> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> >> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >> at com.sun.proxy.$Proxy27.setPermission(Unknown Source)
> >> at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:2319)
> >> ... 15 more
> >>
> >> The 'cloudera' user has permission to read and write in the Hadoop
> >> filesystem.
> >>
> >> Kylin was built from the 0.7.1-staging branch, and I got the same error
> >> on both the Hortonworks Sandbox 2.2.4 and the Cloudera QuickStart VM 5.8.
> >>
> >> Do you have any idea why this exception is happening?
> >>
> >> Regards,
> >> Diego
> >>
>
