[jira] [Commented] (KYLIN-3028) Build cube error when set S3 as working-dir

2018-02-01 Thread Shaofeng SHI (JIRA)

[ https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348360#comment-16348360 ]

Shaofeng SHI commented on KYLIN-3028:
-------------------------------------

This is a bug in Kylin: if kylin.env.hdfs-working-dir is configured to a non-default 
file system and kylin.storage.hbase.cluster-fs is not configured, this error occurs.
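
For illustration, a minimal kylin.properties sketch of the failing combination (the 
bucket name and property names are the ones from the description below; the comments 
are my reading of the failure, not Kylin output):
{code}
# Working dir on a non-default file system (the EMR cluster's default FS is HDFS)
kylin.env.hdfs-working-dir=s3://mybucket/kylin

# kylin.storage.hbase.cluster-fs is NOT set, so the "Create HTable" step
# presumably falls back to the default HDFS and tries to create
# /kylin/kylin_metadata/... there, hitting the AccessControlException below.
{code}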

> Build cube error when set S3 as working-dir
> -------------------------------------------
>
> Key: KYLIN-3028
> URL: https://issues.apache.org/jira/browse/KYLIN-3028
> Project: Kylin
>  Issue Type: Bug
>  Components: Job Engine
>Affects Versions: v2.2.0
> Environment: AWS EMR 5.7, Apache Kylin 2.2 for HBase 1.x
>Reporter: Shaofeng SHI
>Assignee: Shaofeng SHI
>Priority: Minor
> Fix For: v2.3.0
>
>
> 1. Start an AWS EMR cluster, with HBase selected (data stored on S3);
> 2. Download and expand the apache-kylin-2.2 for HBase 1.x binary package on EMR 
> 5.7's master node. Copy the "hbase.zookeeper.quorum" property from 
> /etc/hbase/conf/hbase-site.xml to $KYLIN_HOME/conf/kylin_job_conf.xml; in 
> kylin.properties, set: "kylin.env.hdfs-working-dir=s3://mybucket/kylin"
> 3. Build the sample cube; the job failed at the "Create HTable" step with this error:
> {code}
> 2017-11-10 08:21:35,011 DEBUG [http-bio-7070-exec-2] 
> cachesync.Broadcaster:290 : Done broadcastingUPDATE, cube, kylin_sales_cube
> 2017-11-10 08:21:35,013 ERROR [Scheduler 1778356018 Job 
> 5a2893c9-3a76-458c-a03e-5cd97839fca5-393] common.HadoopShellExecutable:64 : 
> error execute 
> HadoopShellExecutable{id=5a2893c9-3a76-458c-a03e-5cd97839fca5-05, name=Create 
> HTable, state=RUNNING}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=root, access=WRITE, 
> inode="/kylin/kylin_metadata/kylin-5a2893c9-3a76-458c-a03e-5cd97839fca5/kylin_sales_cube/rowkey_stats":hdfs:hadoop:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
> at 
> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
>  

[jira] [Commented] (KYLIN-3028) Build cube error when set S3 as working-dir

2018-01-29 Thread Billy Liu (JIRA)

[ https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343037#comment-16343037 ]

Billy Liu commented on KYLIN-3028:
----------------------------------

Hi [~Shaofengshi], what's the root cause here: the "kylin" keyword, or the 
sub-directory?
To use EMR, does the user need to set both kylin.env.hdfs-working-dir and 
kylin.storage.hbase.cluster-fs?

[jira] [Commented] (KYLIN-3028) Build cube error when set S3 as working-dir

2017-11-10 Thread Shaofeng SHI (JIRA)

[ https://issues.apache.org/jira/browse/KYLIN-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247281#comment-16247281 ]

Shaofeng SHI commented on KYLIN-3028:
-------------------------------------

The workaround is to set "kylin.storage.hbase.cluster-fs=s3://mybucket" (without the 
'kylin' folder) and restart the build from scratch; it then succeeds.
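
A sketch of the workaround configuration (the bucket name is the example from this 
issue; adjust to your environment):
{code}
# kylin.properties on the EMR master node
kylin.env.hdfs-working-dir=s3://mybucket/kylin
# Point the HBase cluster FS at the bucket root, without the 'kylin' folder
kylin.storage.hbase.cluster-fs=s3://mybucket
{code}
After changing this, discard the failed job and start a new build from scratch.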

I will fix this bug.
