> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
This is the root cause. Google "java.lang.OutOfMemoryError: unable to create new native thread" and you will find plenty of answers.

On Thu, Feb 15, 2018 at 9:32 PM, praveen kumar <[email protected]> wrote:

> Hi Team,
>
> I used Apache Kylin for building a cube (star-schema based) in cluster
> mode, but I have the issue mentioned below. Please guide me.
>
> Cluster details:
>
> three machines, each with 64 GB of memory
> one machine set as the job node
> the other two machines set as query nodes
>
> version: Apache Kylin 2.2.0 (HBase)
>
> Exception:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnRuntimeException):
> java.io.IOException: Failed on local exception: java.io.IOException:
> Couldn't set up IO streams; Host Details : local host is:
> "hostname/10.237.247.12"; destination host is: "hostname":9000;
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:147)
>         at org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:217)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler$1.run(HistoryClientService.java:203)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler$1.run(HistoryClientService.java:199)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:199)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getJobReport(HistoryClientService.java:231)
>         at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getJobReport(MRClientProtocolPBServiceImpl.java:122)
>         at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:275)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Caused by: java.io.IOException: Failed on local exception:
> java.io.IOException: Couldn't set up IO streams; Host Details : local host
> is: "CTSINGTOHP12.cts.com/10.237.247.12"; destination host is:
> "CTSINGTOHP12.cts.com":9000;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1414)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1363)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getListing(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
>         at com.sun.proxy.$Proxy9.getListing(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:515)
>         at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1743)
>         at org.apache.hadoop.fs.Hdfs$DirListingIterator.<init>(Hdfs.java:203)
>         at org.apache.hadoop.fs.Hdfs$DirListingIterator.<init>(Hdfs.java:190)
>         at org.apache.hadoop.fs.Hdfs$2.<init>(Hdfs.java:172)
>         at org.apache.hadoop.fs.Hdfs.listStatusIterator(Hdfs.java:172)
>         at org.apache.hadoop.fs.FileContext$20.next(FileContext.java:1393)
>         at org.apache.hadoop.fs.FileContext$20.next(FileContext.java:1388)
>         at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
>         at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1388)
>         at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.listFilteredStatus(JobHistoryUtils.java:438)
>         at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.localGlobber(JobHistoryUtils.java:385)
>         at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.localGlobber(JobHistoryUtils.java:377)
>         at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.localGlobber(JobHistoryUtils.java:372)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanIntermediateDirectory(HistoryFileManager.java:779)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getFileInfo(HistoryFileManager.java:931)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:132)
>         ... 18 more
> Caused by: java.io.IOException: Couldn't set up IO streams
>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:769)
>         at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1381)
>         ... 44 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:717)
>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:762)
>         ... 47 more
>
> Regards
> Praveen.G
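For anyone else who lands on this thread: on Linux, each Java thread is backed by a native thread, so "unable to create new native thread" usually means the OS per-user process/thread limit or the native memory available for thread stacks is exhausted on that node, not the Java heap. A rough sketch of the usual checks follows; the commands are standard Linux, but the limit values and the `kylin` user name in the comments are illustrative, not taken from this cluster:

```shell
# Per-user limit on processes; on Linux every JVM thread counts
# against this limit.
ulimit -u

# Count the threads currently running on the box. Each line of
# `ps -eLf` is one lightweight process, i.e. one thread.
ps -eLf | wc -l

# If the limit is low, raise it for the service user, e.g. in
# /etc/security/limits.conf (illustrative user and value):
#   kylin  soft  nproc  32768
#   kylin  hard  nproc  32768
# A smaller thread stack (e.g. -Xss512k in the JVM options) also
# leaves room for more native threads in the same memory budget.
```

If `ps -eLf | wc -l` is close to `ulimit -u` for the user running the JobHistory Server, that confirms the limit is the bottleneck.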
