Soham, it seems to me that your base directories haven't been created
properly. Stop all the Hadoop-related processes and issue these
commands -
$ sudo rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs
$ sudo mkdir -p /var/lib/hadoop-0.20/cache/hadoop/dfs/{name,data}
$ sudo chown hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs/{name,data}
$ sudo -u hdfs hadoop namenode -format
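
Once the format finishes, a quick sanity check (just a sketch, assuming
the packaged init scripts are in place) is to bring HDFS back up and
list the filesystem root -

$ for service in /etc/init.d/hadoop-hdfs-*; do sudo $service start; done
$ sudo -u hdfs hadoop fs -ls /
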
It should work.
Regards,
Mohammad Tariq
On Mon, Jun 18, 2012 at 12:52 PM, soham sardar
<[email protected]> wrote:
> Yes, I was using CDH3, and then I removed all the nodes and everything
> completely, so as to try CDH4, and Hue more specifically.
>
>
> On Mon, Jun 18, 2012 at 12:48 PM, Mohammad Tariq <[email protected]> wrote:
>> Are you installing CDH4 for the first time, or were you using CDH3
>> with MRv1? If that is the case, you have to uninstall that first; it
>> may cause problems.
>>
>> Regards,
>> Mohammad Tariq
>>
>>
>> On Mon, Jun 18, 2012 at 11:59 AM, soham sardar
>> <[email protected]> wrote:
>>> Hey, when I tried that it says "command not found". I should mention
>>> that I have installed CDH4 via tarball, so are there some changes I
>>> need to make because of the tarball?
>>> I badly need to start the nodes.
>>>
>>> On Fri, Jun 15, 2012 at 5:05 PM, Mohammad Tariq <[email protected]> wrote:
>>>> To start HDFS, use -
>>>>
>>>> $ for service in /etc/init.d/hadoop-hdfs-*
>>>>> do
>>>>> sudo $service start
>>>>> done
>>>>
>>>> and to start MapReduce, do -
>>>>
>>>> $ for service in /etc/init.d/hadoop-0.20-mapreduce-*
>>>>> do
>>>>> sudo $service start
>>>>> done
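>>>>
>>>> (If you are on a plain tarball install, these init scripts won't
>>>> exist. A rough equivalent - just a sketch, assuming HADOOP_PREFIX
>>>> points at wherever you unpacked the tarball - is to use the daemon
>>>> scripts shipped under sbin:)
>>>>
>>>> $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode
>>>> $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh start datanode
>>>> $ $HADOOP_PREFIX/sbin/yarn-daemon.sh start resourcemanager
>>>> $ $HADOOP_PREFIX/sbin/yarn-daemon.sh start nodemanager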
>>>>
>>>> Regards,
>>>> Mohammad Tariq
>>>>
>>>>
>>>> On Fri, Jun 15, 2012 at 4:54 PM, soham sardar <[email protected]>
>>>> wrote:
>>>>> Hey Mohammad,
>>>>> I want to know how to start all the Hadoop nodes. In CDH3 there
>>>>> was a script, /bin/start-all.sh,
>>>>> but in the CDH4 tarballs I don't find any such script?
>>>>>
>>>>> On Fri, Jun 15, 2012 at 4:39 PM, Mohammad Tariq <[email protected]>
>>>>> wrote:
>>>>>> In both the lines? I mean, your hosts file should look something like
>>>>>> this -
>>>>>>
>>>>>> 127.0.0.1 localhost
>>>>>> 127.0.0.1 ubuntu.ubuntu-domain ubuntu
>>>>>>
>>>>>> # The following lines are desirable for IPv6 capable hosts
>>>>>> ::1 ip6-localhost ip6-loopback
>>>>>> fe00::0 ip6-localnet
>>>>>> ff00::0 ip6-mcastprefix
>>>>>> ff02::1 ip6-allnodes
>>>>>> ff02::2 ip6-allrouters
>>>>>>
>>>>>> Regards,
>>>>>> Mohammad Tariq
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 15, 2012 at 4:32 PM, soham sardar
>>>>>> <[email protected]> wrote:
>>>>>>> Hey Mohammad, but it's already 127.0.0.1, I guess.
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 15, 2012 at 4:24 PM, Mohammad Tariq <[email protected]>
>>>>>>> wrote:
>>>>>>>> All looks fine to me. Change the line "127.0.1.1" in your hosts file
>>>>>>>> to "127.0.0.1" and see if it works for you.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Mohammad Tariq
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jun 15, 2012 at 4:14 PM, soham sardar
>>>>>>>> <[email protected]> wrote:
>>>>>>>>> Configuration in the sense that I have given the following configs:
>>>>>>>>>
>>>>>>>>> hdfs-site
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>dfs.replication</name>
>>>>>>>>> <value>1</value>
>>>>>>>>> <description>Default block replication.
>>>>>>>>> The actual number of replications can be specified when the file is
>>>>>>>>> created.
>>>>>>>>> The default is used if replication is not specified in create time.
>>>>>>>>> </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> core-site
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>hadoop.tmp.dir</name>
>>>>>>>>> <value>/app/hadoop/tmp</value>
>>>>>>>>> <description>A base for other temporary directories.</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>fs.default.name</name>
>>>>>>>>> <value>hdfs://localhost:54310</value>
>>>>>>>>> <description>The name of the default file system. A URI whose
>>>>>>>>> scheme and authority determine the FileSystem implementation. The
>>>>>>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>>>>> the FileSystem implementation class. The uri's authority is used to
>>>>>>>>> determine the host, port, etc. for a filesystem.</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> and yarn-site
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.resourcemanager.resource-tracker.address</name>
>>>>>>>>> <value>localhost:8031</value>
>>>>>>>>> <description>host is the hostname of the resource manager and
>>>>>>>>> port is the port on which the NodeManagers contact the Resource
>>>>>>>>> Manager.
>>>>>>>>> </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.resourcemanager.scheduler.address</name>
>>>>>>>>> <value>localhost:8030</value>
>>>>>>>>> <description>host is the hostname of the resourcemanager and port
>>>>>>>>> is the port
>>>>>>>>> on which the Applications in the cluster talk to the Resource
>>>>>>>>> Manager.
>>>>>>>>> </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.resourcemanager.scheduler.class</name>
>>>>>>>>>
>>>>>>>>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>>>>>>>>> <description>In case you do not want to use the default
>>>>>>>>> scheduler</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.resourcemanager.address</name>
>>>>>>>>> <value>localhost:8032</value>
>>>>>>>>> <description>the host is the hostname of the ResourceManager and
>>>>>>>>> the port is the port on
>>>>>>>>> which the clients can talk to the Resource Manager. </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.local-dirs</name>
>>>>>>>>> <value></value>
>>>>>>>>> <description>the local directories used by the
>>>>>>>>> nodemanager</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.address</name>
>>>>>>>>> <value>127.0.0.1:8041</value>
>>>>>>>>> <description>the nodemanagers bind to this port</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.resource.memory-mb</name>
>>>>>>>>> <value>10240</value>
>>>>>>>>> <description>the amount of memory on the NodeManager in
>>>>>>>>> MB</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.remote-app-log-dir</name>
>>>>>>>>> <value>/app-logs</value>
>>>>>>>>> <description>directory on hdfs where the application logs are
>>>>>>>>> moved to </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.log-dirs</name>
>>>>>>>>> <value></value>
>>>>>>>>> <description>the directories used by Nodemanagers as log
>>>>>>>>> directories</description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>> <name>yarn.nodemanager.aux-services</name>
>>>>>>>>> <value>mapreduce.shuffle</value>
>>>>>>>>> <description>shuffle service that needs to be set for Map Reduce
>>>>>>>>> to run </description>
>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> Is there any other change I need to make?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jun 15, 2012 at 4:10 PM, Mohammad Tariq <[email protected]>
>>>>>>>>> wrote:
>>>>>>>>>> Hi Soham,
>>>>>>>>>>
>>>>>>>>>> Have you mentioned all the necessary properties in the
>>>>>>>>>> configuration files? Also make sure your hosts file is OK.
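>>>>>>>>>>
>>>>>>>>>> A quick way to check the latter (just a sketch) is to confirm that
>>>>>>>>>> your hostname resolves to the loopback address:
>>>>>>>>>>
>>>>>>>>>> $ cat /etc/hosts
>>>>>>>>>> $ getent hosts $(hostname)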
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Mohammad Tariq
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Jun 15, 2012 at 3:53 PM, soham sardar
>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>> Hey friends!
>>>>>>>>>>>
>>>>>>>>>>> I have downloaded the CDH4 tarballs, kept them in a folder, and am
>>>>>>>>>>> trying to run the Hadoop nodes and other subsequent tools.
>>>>>>>>>>> I have also set each of the home paths in my bashrc.
>>>>>>>>>>>
>>>>>>>>>>> Now the problem is: when I try
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> hadoop fs -ls
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> 12/06/15 15:51:35 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 0 time(s).
>>>>>>>>>>> 12/06/15 15:51:36 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 1 time(s).
>>>>>>>>>>> 12/06/15 15:51:37 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 2 time(s).
>>>>>>>>>>> 12/06/15 15:51:38 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 3 time(s).
>>>>>>>>>>> 12/06/15 15:51:39 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 4 time(s).
>>>>>>>>>>> 12/06/15 15:51:40 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 5 time(s).
>>>>>>>>>>> 12/06/15 15:51:41 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 6 time(s).
>>>>>>>>>>> 12/06/15 15:51:42 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 7 time(s).
>>>>>>>>>>> 12/06/15 15:51:43 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 8 time(s).
>>>>>>>>>>> 12/06/15 15:51:44 INFO ipc.Client: Retrying connect to server:
>>>>>>>>>>> localhost/127.0.0.1:54310. Already tried 9 time(s).
>>>>>>>>>>> ls: Call From XPS-L501X/127.0.1.1 to localhost:54310 failed on
>>>>>>>>>>> connection exception: java.net.ConnectException: Connection refused;
>>>>>>>>>>> For more details see:
>>>>>>>>>>> http://wiki.apache.org/hadoop/ConnectionRefused
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> This is the error. Can someone help me as to why this error is
>>>>>>>>>>> occurring?