What I can read from the log:
- It's a warning, not yet a fatal error.
- DistributedFileSystem.getFileStatus() hits a StandbyException, meaning
the NameNode the client reached is in standby state and cannot serve reads.

Need to fix the HA HDFS setup first, so that clients fail over to the
active NameNode instead of sticking to the standby.
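A rough way to check the cluster from the command line (a sketch only: "nn1"
and "nn2" are assumed HA service IDs here; look up the real ones under
dfs.ha.namenodes.nameservice1 in hdfs-site.xml):

```shell
# See which NameNode is active; at least one should report "active".
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# If both report "standby", trigger a manual failover
# (or let the ZKFC handle it if automatic failover is enabled).
hdfs haadmin -failover nn1 nn2

# Also confirm the client-side failover proxy is configured; without it,
# clients keep hitting whichever NameNode they tried first.
hdfs getconf -confKey dfs.client.failover.proxy.provider.nameservice1
```

If the last command fails or prints nothing, add the failover proxy provider
(typically ConfiguredFailoverProxyProvider) to the client's hdfs-site.xml.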

Yang

On Tue, Sep 27, 2016 at 6:53 AM, Ashika Umanga Umagiliya <
[email protected]> wrote:

> Greetings,
>
> I successfully managed to install Kylin and build the sample cube.
> I then created my own cube using a relatively large Hive table, and during
> cube generation, at the step "#3 Step Name: Extract Fact Table Distinct
> Columns", the MR job fails. (I tried 3 times and it failed every time with
> the same error.)
>
> MR log is as follows :
>
>
> 2016-09-26 09:32:09,728 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1472454550517_38668_000001
> 2016-09-26 09:32:09,958 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-09-26 09:32:09,993 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-09-26 09:32:10,229 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 38668 cluster_timestamp: 1472454550517 } attemptId: 1 } keyId: -991094474)
> 2016-09-26 09:32:10,230 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 222750 for kylin)
> 2016-09-26 09:32:10,246 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
> 2016-09-26 09:32:10,248 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
> 2016-09-26 09:32:10,285 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
> 2016-09-26 09:32:10,286 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
> 2016-09-26 09:32:10,804 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2016-09-26 09:32:10,810 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2016-09-26 09:32:10,866 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
> at org.apache.hadoop.ipc.Client.call(Client.java:1402)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:773)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
> at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2162)
> at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1363)
> at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1359)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1359)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:291)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1556)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1553)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1486)
> 2016-09-26 09:32:10,991 INFO [main] org.apache.hadoop.yarn
