Hi Terry,

Please check whether there is another yarn-site.xml, mapred-site.xml, or
core-site.xml on your Kylin machine, e.g.:

find / -name yarn-site.xml
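
You can also check which configuration directory is actually on the
classpath and whether the ResourceManager address is defined there. A quick
sketch (assuming HADOOP_CONF_DIR points at your cluster's configuration
directory):

echo $HADOOP_CONF_DIR
hadoop classpath | tr ':' '\n' | grep -i conf
grep -A 1 yarn.resourcemanager.address $HADOOP_CONF_DIR/yarn-site.xml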

Kylin reads the Hadoop configurations from the classpath. If a stale or
incorrect file is on the classpath, Kylin may not connect to your Hadoop
cluster properly. Previously you mentioned that, after copying
"yarn.resourcemanager.address" into kylin_job_conf.xml, the job could move
ahead; I guess that is the case here.
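
In case it helps, here is a minimal sketch of how that property could look
in kylin_job_conf.xml (the host and port below are placeholders; use the
ResourceManager address from your real yarn-site.xml):

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <!-- placeholder: replace with your actual ResourceManager host:port -->
    <value>hsmaster:8032</value>
  </property>
</configuration>

That said, the cleaner fix is usually to make sure only the correct
yarn-site.xml is on Kylin's classpath, so that all Hadoop clients see the
same settings.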

2018-08-10 17:30 GMT+08:00 Terry Lu <lujinlin1...@gmail.com>:

>  Our company uses the open-source Apache Hadoop 2.7.3 release; everything
> is started and configured manually. This environment has been running
> MapReduce, Spark, Hive, and HBase programs without problems for a long
> time, and many jobs are running on it. Switching to CDH or HDP is not
> realistic at the company level. Now that we are connecting Kylin, we get
> this error. Is it a version problem? Or does Kylin not support open-source
> Apache Hadoop? We would be very grateful for your help.
>
> 2018-08-10 10:27 GMT+08:00 Terry Lu <lujinlin1...@gmail.com>:
>
> > Yes, our company uses the open-source Apache Hadoop 2.7.3 release;
> > everything is started and configured manually. This environment has been
> > running MapReduce, Spark, Hive, and HBase programs without problems for a
> > long time, and many jobs are running on it. Switching to CDH or HDP is
> > not realistic at the company level. Now that we are connecting Kylin, we
> > get this error. Is it a version problem? Or is open-source Apache Hadoop
> > not supported? Please help us with this; we would be extremely grateful!
> >
> > 2018-08-10 10:17 GMT+08:00 ShaoFeng Shi <shaofeng...@apache.org>:
> >
> >> Hi Terry,
> >>
> >> I saw your email several days ago, but I have no idea about the issue.
> >> I believe it is some environment problem.
> >>
> >> Are you setting up the Hadoop cluster manually? Usually, if you're not a
> >> Hadoop expert, we recommend starting with a commercial Hadoop release
> >> like HDP, CDH, or AWS EMR; that could save you a lot of time and effort.
> >>
> >>
> >>
> >> 2018-08-10 10:07 GMT+08:00 Terry Lu <lujinlin1...@gmail.com>:
> >>
> >> > Hi:
> >> >
> >> > We are using Kylin (Kylin version 2.3, Hadoop is Apache hadoop-2.7.3,
> >> > and the whole cluster starts up normally). When building a Cube, at
> >> > #3 Step Name: Extract Fact Table Distinct Columns we got a
> >> > "localhost:18032 failed" error, so we added the
> >> > yarn.resourcemanager.address setting to the kylin_job_conf.xml file.
> >> > Originally the Cube build failed at step #3; now it can reach step
> >> > #10, but the problem still exists and has not been solved. Attached
> >> > are the Kylin logs and the kylin, yarn, and mapred configuration
> >> > files. Please help us with this; we would be extremely grateful!
> >> >
> >> > Below is the error from #10 Step Name: Build Cube In-Mem Duration:
> >> > 20.19 mins Waiting: 0 seconds:
> >> >
> >> >
> >> >
> >> > java.net.ConnectException: Call From hsmaster/10.9.0.86 to localhost:18032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> >> >       at sun.reflect.GeneratedConstructorAccessor75.newInstance(Unknown Source)
> >> >       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> >       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> >> >       at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
> >> >       at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
> >> >       at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> >> >       at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> >> >       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> >> >       at com.sun.proxy.$Proxy66.getNewApplication(Unknown Source)
> >> >       at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
> >> >       at sun.reflect.GeneratedMethodAccessor122.invoke(Unknown Source)
> >> >       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> >       at java.lang.reflect.Method.invoke(Method.java:498)
> >> >       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> >> >       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >> >       at com.sun.proxy.$Proxy67.getNewApplication(Unknown Source)
> >> >       at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:219)
> >> >       at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:227)
> >> >       at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:187)
> >> >       at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:231)
> >> >       at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:153)
> >> >       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> >> >       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> >> >       at java.security.AccessController.doPrivileged(Native Method)
> >> >       at javax.security.auth.Subject.doAs(Subject.java:422)
> >> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> >> >       at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> >> >       at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:175)
> >> >       at org.apache.kylin.engine.mr.steps.InMemCuboidJob.run(InMemCuboidJob.java:121)
> >> >       at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:130)
> >> >       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
> >> >       at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:67)
> >> >       at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
> >> >       at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:300)
> >> >       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >> >       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >> >       at java.lang.Thread.run(Thread.java:748)
> >> > Caused by: java.net.ConnectException: Connection refused
> >> >       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >> >       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> >> >       at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >> >       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
> >> >       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
> >> >       at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
> >> >       at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
> >> >       at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
> >> >       at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
> >> >       at org.apache.hadoop.ipc.Client.call(Client.java:1451)
> >> >       ... 31 more
> >> > result code:2
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Best regards,
> >>
> >> Shaofeng Shi 史少锋
> >>
> >
> >
>



-- 
Best regards,

Shaofeng Shi 史少锋
