Regarding the configuration, I have set the following properties:
hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified at create time.
</description>
</property>
core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
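One thing I noticed while reading the Hadoop 2 docs: `fs.default.name` is deprecated there in favour of `fs.defaultFS` (the old name still works as an alias, so this should not be the cause of the error). If I understood that correctly, the equivalent modern form would be:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:54310</value>
</property>
```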
and yarn-site.xml:
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:8031</value>
<description>host is the hostname of the resource manager and
port is the port on which the NodeManagers contact the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:8030</value>
<description>host is the hostname of the ResourceManager and port is the port
on which the Applications in the cluster talk to the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>In case you do not want to use the default
scheduler</description>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
<description>the host is the hostname of the ResourceManager and the port
is the port on which the clients can talk to the Resource Manager.</description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value></value>
<description>the local directories used by the nodemanager</description>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>127.0.0.1:8041</value>
<description>the nodemanagers bind to this port</description>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>10240</value>
<description>the amount of memory available on the NodeManager, in MB</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/app-logs</value>
<description>directory on hdfs where the application logs are
moved to </description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value></value>
<description>the directories used by Nodemanagers as log
directories</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
<description>shuffle service that needs to be set for Map Reduce
to run </description>
</property>
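Also, some setup guides I have seen pair the shuffle aux-service with an explicit handler class. I am not sure whether CDH4 strictly requires it, but the property those guides show (so treat the class name as their suggestion, not something I have verified) is:

```xml
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```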
Is there any other change I need to make?
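For what it's worth, before digging further into the configs I wanted to confirm whether anything is actually listening on the NameNode RPC port at all. This is just a plain-bash sketch (it assumes bash's /dev/tcp support; 54310 is the port from my fs.default.name):

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp pseudo-device.
# The subshell exits 0 only if something accepts the connection.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if check_port localhost 54310; then
  echo "NameNode RPC port 54310 is open"
else
  echo "connection refused on 54310 - the NameNode is probably not running"
fi
```

If the port is closed, that would explain the "Connection refused" below, and the fix would be starting (and, on first run, formatting) the NameNode rather than changing more properties.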
On Fri, Jun 15, 2012 at 4:10 PM, Mohammad Tariq <[email protected]> wrote:
> Hi Soham,
>
> Have you mentioned all the necessary properties in the
> configuration files? Also, make sure your hosts file is OK.
>
> Regards,
> Mohammad Tariq
>
>
> On Fri, Jun 15, 2012 at 3:53 PM, soham sardar <[email protected]>
> wrote:
>> Hey friends!
>>
>> I have downloaded the CDH4 tarballs, put them in a folder, and am trying to
>> run the Hadoop daemons and the other related tools.
>> I have also set each of the home paths in my .bashrc.
>>
>> Now the problem is that when I run
>>
>>
>> hadoop fs -ls
>>
>>
>> 12/06/15 15:51:35 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 0 time(s).
>> 12/06/15 15:51:36 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 1 time(s).
>> 12/06/15 15:51:37 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 2 time(s).
>> 12/06/15 15:51:38 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 3 time(s).
>> 12/06/15 15:51:39 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 4 time(s).
>> 12/06/15 15:51:40 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 5 time(s).
>> 12/06/15 15:51:41 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 6 time(s).
>> 12/06/15 15:51:42 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 7 time(s).
>> 12/06/15 15:51:43 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 8 time(s).
>> 12/06/15 15:51:44 INFO ipc.Client: Retrying connect to server:
>> localhost/127.0.0.1:54310. Already tried 9 time(s).
>> ls: Call From XPS-L501X/127.0.1.1 to localhost:54310 failed on
>> connection exception: java.net.ConnectException: Connection refused;
>> For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> This is the error. Can someone help me figure out why it is occurring?