Run jps and check which Java processes are running. Is this a single-node
cluster?
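The jps check above can be sketched as a quick script. The daemon names are what a healthy single-node cluster of this Hadoop generation should show; the sample `jps` output and its PIDs are made up for illustration, so on the real machine substitute the live command output as noted in the comment:

```shell
# Expected daemons on a healthy single-node cluster of this Hadoop
# generation; the sample `jps` output below is hypothetical -- on a live
# machine replace it with: jps_out="$(jps)"
jps_out='12001 NameNode
12187 DataNode
12399 SecondaryNameNode
12542 JobTracker
12701 TaskTracker
12900 Jps'

# Report which expected daemons are present and which are missing.
for daemon in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  if printf '%s\n' "$jps_out" | grep -q " ${daemon}\$"; then
    echo "OK      ${daemon}"
  else
    echo "MISSING ${daemon}"
  fi
done
```

If DataNode is absent from the real jps listing, the datanode process was launched by start-all.sh but has since exited.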


On Fri, Apr 4, 2014 at 11:54 AM, Mahmood Naderan <[email protected]> wrote:

> Strange! See the output
>
> $ ./search/bin/hadoop namenode -format
> Re-format filesystem in /data/mahmood/nutch-test/filesystem/name ? (Y or
> N) Y
>
> $ ./search/bin/start-all.sh
> starting namenode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-namenode-orca.out
> localhost: starting datanode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.out
> localhost: starting secondarynamenode, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-secondarynamenode-orca.out
> starting jobtracker, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-jobtracker-orca.out
> localhost: starting tasktracker, logging to
> /data/mahmood/nutch-test/search/logs/hadoop-mahmood-tasktracker-orca.out
>
>
> $ ./search/bin/hadoop dfsadmin -report
> Configured Capacity: 0 (0 KB)
> Present Capacity: 0 (0 KB)
> DFS Remaining: 0 (0 KB)
> DFS Used: 0 (0 KB)
> DFS Used%: NaN%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 0 (0 total, 0 dead)
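"Datanodes available: 0" in the report above means no datanode ever registered with the namenode, even though start-all.sh launched one. The usual next step is to read the datanode log; the log path below is inferred from the start-all.sh output above (the log4j `.log` file alongside the `.out` file), and the error line is a hypothetical sample of the namespaceID mismatch that commonly follows a `namenode -format` on Hadoop of this generation:

```shell
# Read the datanode's log4j log (path inferred from the start-all.sh
# output above), e.g.:
#   tail -50 /data/mahmood/nutch-test/search/logs/hadoop-mahmood-datanode-orca.log
#
# After a 'namenode -format', a classic failure is a namespaceID mismatch.
# Sample (hypothetical) log line:
log_line='ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs'

if printf '%s\n' "$log_line" | grep -q 'Incompatible namespaceIDs'; then
  echo "likely cause: namenode was reformatted; clear the datanode storage directory and restart"
fi
```

If the real log shows this error, the datanode's stored namespaceID no longer matches the freshly formatted namenode's, which is why it dies on startup.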
>
>
> Regards,
> Mahmood
>   On Friday, April 4, 2014 11:09 PM, Jitendra Yadav <[email protected]> wrote:
>  Yes, run that command and check whether you have any live datanode.
>
> Thanks
> jitendra
>
>
> On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan <[email protected]> wrote:
>
> Jitendra,
> For the first part, can you explain how?
> For the second part, do you mean "hadoop dfsadmin -report"?
>
> Regards,
> Mahmood
>
>
>   On Friday, April 4, 2014 9:44 PM, Jitendra Yadav <[email protected]> wrote:
> >Can you check the total number of running datanodes in your cluster and
> >also the free HDFS space?
> >
> >
> >Thanks
> >Jitendra
