Hi, I did not map nameservice1 to an exact IP in /etc/hosts; Cloudera
Manager takes care of the Hadoop HA configuration.

My /etc/hosts is written as:

192.168.2.44    tserver2
192.168.3.20    testserver  test.ip.uu.cc test.stephen.uu.cc
192.168.0.92    cdh-DEV-server-1.idreamsky.com
192.168.0.93    cdh-DEV-server-2.idreamsky.com
192.168.0.95    cdh-DEV-server-3.idreamsky.com
192.168.0.94    ids

but it still throws the UnknownHostException.
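For what it is worth, `nameservice1` is a logical HA name defined in hdfs-site.xml, not a DNS hostname, so mapping it in /etc/hosts is not expected to help; the process that throws (here, the Kylin job) has to load an hdfs-site.xml that defines it. A minimal sketch of checking what a given client config defines (the sample file and temp path are illustrative; on a CDH node the real client config is typically /etc/hadoop/conf/hdfs-site.xml):

```shell
# Sketch: extract the configured nameservice from a client config file.
# A sample config is written to a temp file purely for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
  </property>
</configuration>
EOF

# Pull the value that follows the dfs.nameservices property name.
ns=$(grep -A1 '<name>dfs.nameservices</name>' "$conf" \
     | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "configured nameservice: $ns"   # configured nameservice: nameservice1
rm -f "$conf"
```

If the config that the failing process actually sees prints nothing here, the HA definition is simply not on its classpath.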




胡明哲    edison.hu
Dept.: R&D / Data Platform Center / Platform Group
 
16/F, A3 Bld, Kexing Science Park, 15 Keyuan Rd, Nanshan District,
Shenzhen, China
T +8613554462513      E [email protected]
 
From: Dong Li
Date: 2016-01-26 18:11
To: dev
Subject: Re: ERROR: issues with building cube in step 2 in kylin
Hello,
 
Did you properly set up /etc/hosts?
 
Refer to:
https://mail-archives.apache.org/mod_mbox/incubator-kylin-dev/201505.mbox/%3CCAF7etTnABWrossF_Ko6XEjnQvD=qjpuxzv-1hvsjf7txyz6...@mail.gmail.com%3E
 
Thanks,
Dong Li
 
2016-01-26 15:00 GMT+08:00 [email protected] <[email protected]>:
 
> Hi,
>
> I ran into a problem while building a cube: the job was stopped at step
> 2 by the issue below.
>
>
> the log shows the error:
>
> java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
>     at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:664)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:608)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>     at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:336)
>     at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:130)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303)
>     at org.apache.kylin.job.hadoop.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:121)
>     at org.apache.kylin.job.hadoop.cube.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:83)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>     at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>     at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>     at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.UnknownHostException: nameservice1
>     ... 35 more
> result code: 2
>
> But the value of nameservice1 is set in hdfs-site.xml, and all the
> other components work well independently.
>
> The relevant HDFS properties are:
>
> <!--Autogenerated by Cloudera Manager-->
> <configuration>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>nameservice1</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.nameservice1</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
>   <property>
>     <name>dfs.ha.automatic-failover.enabled.nameservice1</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>ha.zookeeper.quorum</name>
>     <value>cdh-DEV-server-1.idreamsky.com:2181,cdh-DEV-server-2.idreamsky.com:2181,cdh-DEV-server-3.idreamsky.com:2181</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.nameservice1</name>
>     <value>namenode86,namenode133</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.nameservice1.namenode86</name>
>     <value>cdh-DEV-server-1.idreamsky.com:8020</value>
>   </property>
>   <property>
>     <name>dfs.namenode.servicerpc-address.nameservice1.namenode86</name>
>     <value>cdh-DEV-server-1.idreamsky.com:8022</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.nameservice1.namenode86</name>
>
> Could you please tell me what to do next to fix this problem?
>
> best regards
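Reading the stack trace above: the failure goes through NameNodeProxies.createNonHAProxy, which happens when the configuration the job actually loads has no dfs.client.failover.proxy.provider.nameservice1 key. In that case the HDFS client treats the URI authority as a plain hostname and tries to resolve it via DNS, hence UnknownHostException even though the server-side config is correct. A rough sketch of that decision (the sample files are illustrative stand-ins, not the real client code path):

```shell
# Classify an hdfs:// URI authority the way the DFS client roughly does:
# with a failover proxy provider configured for it, the authority is a
# logical nameservice; without one, it is resolved as a real hostname.
classify_authority() {  # $1 = config file, $2 = URI authority
  if grep -q "dfs.client.failover.proxy.provider.$2" "$1"; then
    echo "HA: logical nameservice"
  else
    echo "non-HA: resolved as hostname (UnknownHostException if undefined)"
  fi
}

full=$(mktemp); bare=$(mktemp)
# Full HA client config, as Cloudera Manager generates it:
cat > "$full" <<'EOF'
<property><name>dfs.client.failover.proxy.provider.nameservice1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
EOF
# Empty config, i.e. what a job sees when hdfs-site.xml is not on its classpath:
: > "$bare"

ha=$(classify_authority "$full" nameservice1)
noha=$(classify_authority "$bare" nameservice1)
echo "$ha"     # HA: logical nameservice
echo "$noha"   # non-HA: resolved as hostname (UnknownHostException if undefined)
rm -f "$full" "$bare"
```

So the usual fix is to make sure the full HA hdfs-site.xml is on the classpath of the process submitting the job, not to edit /etc/hosts.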
>
