Here is my mapred-site.xml config:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>


Also, the job runs fine in local (in-process) mode if I remove the dependency on YARN, i.e.
if I comment out:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

in mapred-site.xml.
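For context on the quoted error below: the "Does not contain a valid host:port authority" failure usually points at the hostname itself rather than the port. Underscores, as in poc_hadoop04, are not legal in DNS hostnames, so Java's URI parser (which, as far as I know, Hadoop's NetUtils relies on for this check) refuses to recognise the string as a host. A minimal sketch of that behaviour:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HostCheck {
    public static void main(String[] args) throws URISyntaxException {
        // Underscores are not allowed in hostnames (RFC 952/1123), so URI
        // cannot parse a server-based authority: getHost() returns null,
        // which is roughly the condition NetUtils.createSocketAddr rejects.
        URI bad = new URI("scheme://poc_hadoop04:46162");
        System.out.println(bad.getHost());   // null

        // The same name with a hyphen parses fine.
        URI good = new URI("scheme://poc-hadoop04:46162");
        System.out.println(good.getHost());  // poc-hadoop04
    }
}
```

If that is the cause here, renaming the node to something without an underscore (e.g. poc-hadoop04, a hypothetical name) in /etc/hosts and the Hadoop configs should let the container launch succeed.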




On Thu, Apr 10, 2014 at 4:43 PM, Kiran Dangeti <kirandkumar2...@gmail.com> wrote:

> Rahul,
>
> Please check the port name given in mapred-site.xml
> Thanks
> Kiran
>
> On Thu, Apr 10, 2014 at 3:23 PM, Rahul Singh 
> <smart.rahul.i...@gmail.com> wrote:
>
>>  Hi,
>>   I am getting the following exception while running the word count example:
>>
>> 14/04/10 15:17:09 INFO mapreduce.Job: Task Id :
>> attempt_1397123038665_0001_m_000000_2, Status : FAILED
>> Container launch failed for container_1397123038665_0001_01_000004 :
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: poc_hadoop04:46162
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
>>     at
>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:210)
>>     at
>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
>>     at
>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
>>     at
>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
>>     at
>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
>>     at
>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
>>     at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>     at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>     at java.lang.Thread.run(Thread.java:662)
>>
>>
>> I have everything configured, with HDFS running: I am able to create
>> files and directories. Running jps on my machine shows all components
>> running.
>>
>> 10290 NameNode
>> 10416 DataNode
>> 10738 ResourceManager
>> 11634 Jps
>> 10584 SecondaryNameNode
>> 10844 NodeManager
>>
>>
>> Any pointers will be appreciated.
>>
>> Thanks and Regards,
>> -Rahul Singh
>>
>
>