Here is the complete output.


2011-11-28 16:34:27,606 WARN  conf.Configuration
(Configuration.java:set(629)) - mapred.used.genericoptionsparser is
deprecated. Instead, use mapreduce.client.genericoptionsparser.used
2011-11-28 16:34:27,660 INFO  ipc.YarnRPC (YarnRPC.java:create(47)) -
Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2011-11-28 16:34:27,663 INFO  mapred.ResourceMgrDelegate
(ResourceMgrDelegate.java:<init>(95)) - Connecting to ResourceManager at /
0.0.0.0:8040
2011-11-28 16:34:27,664 INFO  ipc.HadoopYarnRPC
(HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc
proxy for protocol interface org.apache.hadoop.yarn.api.ClientRMProtocol
2011-11-28 16:34:27,700 INFO  mapred.ResourceMgrDelegate
(ResourceMgrDelegate.java:<init>(99)) - Connected to ResourceManager at /0.0.0.0:8040
2011-11-28 16:34:27,734 INFO  mapreduce.Cluster
(Cluster.java:initialize(116)) - Failed to use
org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
java.lang.reflect.InvocationTargetException
2011-11-28 16:34:27,735 INFO  mapreduce.Cluster
(Cluster.java:initialize(111)) - Cannot pick
org.apache.hadoop.mapred.LocalClientProtocolProvider as the
ClientProtocolProvider - returned null protocol
2011-11-28 16:34:27,736 INFO  mapreduce.Cluster
(Cluster.java:initialize(111)) - Cannot pick
org.apache.hadoop.mapred.JobTrackerClientProtocolProvider as the
ClientProtocolProvider - returned null protocol
java.io.IOException: Cannot initialize Cluster. Please check your
configuration for mapreduce.framework.name and the correspond server
addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:460)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:450)
    at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:246)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
    at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:189)
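
For reference, here is a quick sanity check of what the job client is actually
reading (the env vars and paths below are assumptions about a standard tarball
setup rather than anything from the Cloudera post, so adjust them to match the
install):

# which conf directories the client-side scripts will use
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR  YARN_CONF_DIR=$YARN_CONF_DIR"
# the framework name the client should pick up (expecting yarn)
grep -A1 "mapreduce.framework.name" $HADOOP_CONF_DIR/mapred-site.xml
# whether a ResourceManager address is set explicitly; the log above shows
# the client connecting to /0.0.0.0:8040
grep -A1 "yarn.resourcemanager.address" $HADOOP_CONF_DIR/yarn-site.xml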


2011/11/28 Stephen Boesch <java...@gmail.com>

> I had mentioned in the original post that the configuration files were set
> up exactly as in the Cloudera post. That includes the yarn-site.xml.
>
> But since there are questions about it, I'll go ahead and include them
> below.
>
> This setup DID work one time; it just does not restart properly.
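>
> For reference, a rough sketch of restarting the daemons by hand (a sketch
> only: the script locations assume the stock 0.23 layout, where they may sit
> under sbin/ or bin/ depending on the build, and $HADOOP_HOME points at the
> install; the history server is started as described in the Cloudera post):
>
> # HDFS daemons first, then the YARN daemons
> $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
> $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
> $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager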
>
>
>
> yarn-site.xml
>
> <?xml version="1.0"?>
> <configuration>
> <property>
> <name>yarn.nodemanager.aux-services</name>
> <value>mapreduce.shuffle</value>
> </property>
> <property>
> <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
> <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
> </configuration>
>
>
> core-site.xml
>
>
> <?xml version="1.0"?>
> <?xml-stylesheet href="configuration.xsl"?>
> <configuration>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://localhost:9000</value>
> </property>
> <property>
> <name>yarn.user</name>
> <value>had</value>
> </property>
> </configuration>
>
> hdfs-site.xml
>
>
> <?xml version="1.0"?>
> <?xml-stylesheet href="configuration.xsl"?>
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> <property>
> <name>dfs.permissions</name>
> <value>false</value>
> </property>
> </configuration>
>
>
> mapred-site.xml
>
> <?xml version="1.0"?>
> <?xml-stylesheet href="configuration.xsl"?>
> <configuration>
> <property>
> <name> mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
> </configuration>
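>
> A possible extra property for yarn-site.xml (my own guess, not something
> from the Cloudera post) would be to pin the ResourceManager client address
> explicitly, since the log shows the client using /0.0.0.0:8040:
>
> <property>
> <name>yarn.resourcemanager.address</name>
> <value>localhost:8040</value>
> </property>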
>
> thx
>
> 2011/11/28 Stephen Boesch <java...@gmail.com>
>
>> Yes, I did both of those already.
>>
>>
>> 2011/11/28 Marcos Luis Ortiz Valmaseda <marcosluis2...@googlemail.com>
>>
>>> 2011/11/28 Stephen Boesch <java...@gmail.com>:
>>> >
>>> > Hi
>>> >   I set up a pseudo-cluster according to the instructions here:
>>> >  http://www.cloudera.com/blog/2011/11/building-and-deploying-mr2/.
>>> > Initially the randomwriter example worked, but after a crash on the
>>> > machine and restarting the services I am getting the errors shown below.
>>> > jps seems to think the processes are running properly:
>>> >
>>> > had@mithril:/shared/hadoop$ jps
>>> > 7980 JobHistoryServer
>>> > 7668 NameNode
>>> > 7821 ResourceManager
>>> > 7748 DataNode
>>> > 8021 Jps
>>> > 7902 NodeManager
>>> >
>>> > $ hadoop jar hadoop-mapreduce-examples-0.23.0.jar  randomwriter
>>> > -Dmapreduce.job.user.name=$USER
>>> > -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
>>> > -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=64m
>>> > -Ddfs.block.size=64m -libjars
>>> > $YARN_HOME/modules/hadoop-mapreduce-client-jobclient-0.23.0.jar output
>>> >
>>> > 2011-11-28 10:23:56,102 WARN  conf.Configuration
>>> > (Configuration.java:set(629)) - mapred.used.genericoptionsparser is
>>> > deprecated. Instead, use mapreduce.client.genericoptionsparser.used
>>> > 2011-11-28 10:23:56,158 INFO  ipc.YarnRPC (YarnRPC.java:create(47)) -
>>> > Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
>>> > 2011-11-28 10:23:56,162 INFO  mapred.ResourceMgrDelegate
>>> > (ResourceMgrDelegate.java:<init>(95)) - Connecting to ResourceManager at
>>> > /0.0.0.0:8040
>>> > 2011-11-28 10:23:56,163 INFO  ipc.HadoopYarnRPC
>>> > (HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc proxy
>>> > for protocol interface org.apache.hadoop.yarn.api.ClientRMProtocol
>>> > 2011-11-28 10:23:56,203 INFO  mapred.ResourceMgrDelegate
>>> > (ResourceMgrDelegate.java:<init>(99)) - Connected to ResourceManager at
>>> > /0.0.0.0:8040
>>> > 2011-11-28 10:23:56,248 INFO  mapreduce.Cluster
>>> > (Cluster.java:initialize(116)) - Failed to use
>>> > org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
>>> > java.lang.reflect.InvocationTargetException
>>> > 2011-11-28 10:23:56,250 INFO  mapreduce.Cluster
>>> > (Cluster.java:initialize(111)) - Cannot pick
>>> > org.apache.hadoop.mapred.LocalClientProtocolProvider as the
>>> > ClientProtocolProvider - returned null protocol
>>> > 2011-11-28 10:23:56,251 INFO  mapreduce.Cluster
>>> > (Cluster.java:initialize(111)) - Cannot pick
>>> > org.apache.hadoop.mapred.JobTrackerClientProtocolProvider as the
>>> > ClientProtocolProvider - returned null protocol
>>> > java.io.IOException: Cannot initialize Cluster. Please check your
>>> > configuration for mapreduce.framework.name and the correspond server
>>> > addresses.
>>> >
>>> > My *-site.xml files are precisely as shown on the instructions page.
>>> > In any case, here is the one that is most germane, mapred-site.xml:
>>> > <?xml version="1.0"?>
>>> > <?xml-stylesheet href="configuration.xsl"?>
>>> > <configuration>
>>> > <property>
>>> > <name> mapreduce.framework.name</name>
>>> > <value>yarn</value>
>>> > </property>
>>> > </configuration>
>>> >
>>>
>>> Remember that you have to configure two conf files related to YARN.
>>>  yarn-site.xml:
>>>  <?xml version="1.0"?>
>>> <configuration>
>>> <!-- Site specific YARN configuration properties -->
>>> <property>
>>> <name>yarn.nodemanager.aux-services</name>
>>> <value>mapreduce.shuffle</value>
>>> </property>
>>> <property>
>>> <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>> <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>> </property>
>>> </configuration>
>>>
>>> and mapred-site.xml
>>>
>>> <?xml version="1.0"?>
>>> <?xml-stylesheet href="configuration.xsl"?>
>>> <configuration>
>>> <property>
>>> <name> mapreduce.framework.name</name>
>>> <value>yarn</value>
>>> </property>
>>> </configuration>
>>>
>>> Regards
>>>
>>> --
>>> Marcos Luis Ortíz Valmaseda
>>>  Linux Infrastructure Engineer
>>>  Linux User # 418229
>>>  http://marcosluis2186.posterous.com
>>>  http://www.linkedin.com/in/marcosluis2186
>>>  Twitter: @marcosluis2186
>>>
>>
>>
>
