I may not be correct here, but as far as I can tell this is related to your HDFS DataNode port. Have you perhaps created your "UNIX domain socket path" but not granted read/write permissions to the user you run your MapReduce jobs as?
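If it is set, a quick way to check is from a shell on the node. This is only a sketch; /var/run/hdfs-sockets below is an assumption based on the common CDH default, so substitute whatever path your cluster actually configures:

    # Inspect ownership and mode of the directory holding the DataNode socket.
    ls -ld /var/run/hdfs-sockets

    # If the user submitting the MapReduce jobs cannot read/traverse it,
    # loosening the mode should clear the permission problem, e.g.:
    sudo chmod 755 /var/run/hdfs-sockets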
If you are using Cloudera for your setup, you can find this setting in the Cloudera Manager web UI. All you have to do is go to HDFS -> Configuration and search for "unix"; you should see the configuration entry there. If any value is configured, make sure you have set proper permissions on that folder. (If you prefer the command line, there is a note below the quoted message.) I hope this helps.

On Thu, Oct 22, 2015 at 11:34 PM, Edward Capriolo <[email protected]> wrote:

> I have just updated to CDH 5.4.2.
>
> When multiple map reduce jobs run at once a port bind conflict sometimes
> happens. It seems like from the message that binding to 0.0.0.0:0 will pick a
> random port which should not cause a conflict but that does not seem to
> happen.
>
> at sun.nio.ch.Net.bind(Net.java:436)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.apache.hadoop.ipc.Server.bind(Server.java:407)
> ... 19 more
> 2015-10-04 19:31:10,567 INFO [main] org.apache.hadoop.service.AbstractService:
> Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state STARTED;
> cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> java.net.BindException: Problem binding to [0.0.0.0:0]
> java.net.BindException: Address already in use; For more details see:
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> java.net.BindException: Problem binding to [0.0.0.0:0]
> java.net.BindException: Address already in use; For more details see:
> http://wiki.apache.org/hadoop/BindException
> at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
> at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
> at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:119)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1084)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1500)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1496)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
> Caused by: java.net.BindException: Problem binding to [0.0.0.0:0]
> java.net.BindException: Address already in use; For more details see:
> http://wiki.apache.org/hadoop/BindException
>
> Does anyone know why this happens? Also a work around that does not involve
> an upgrade?
>
> TX
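P.S. If clicking through Cloudera Manager is awkward, the same entry can be read from a shell. The property name below is the standard HDFS one for the DataNode's UNIX domain socket, assuming your client configuration is picked up from the usual place:

    # Print the configured UNIX domain socket path, if any.
    hdfs getconf -confKey dfs.domain.socket.path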
