Hi,
Glad to know it helped.
If you need to get your cluster up and running quickly, you can lower the 
safe mode threshold parameter (dfs.safemode.threshold.pct on 0.20). If you set 
it to 0, the NN will not wait in safe mode on startup.
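A minimal sketch of the override in conf/hdfs-site.xml (I believe the property 
is named dfs.safemode.threshold.pct on 0.20, but please double-check it against 
the hdfs-default.xml shipped with your release):

  <property>
    <!-- a value <= 0 means: do not wait for any percentage of blocks
         to be reported before leaving safe mode on startup -->
    <name>dfs.safemode.threshold.pct</name>
    <value>0</value>
  </property>

Note this only skips the wait at startup; you can always force an exit later 
with "hadoop dfsadmin -safemode leave".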

Amogh


On 1/19/10 12:39 PM, "prasenjit mukherjee" <[email protected]> 
wrote:

That was exactly the reason. Thanks a bunch.

On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu <[email protected]> wrote:
> 2010/1/19 prasenjit mukherjee <[email protected]>:
>>  I run "hadoop fs -rmr .." immediately after start-all.sh    Does the
>> namenode always start in safemode and after sometime switches to
>> normal mode ? If that is the problem then your suggestion of waiting
>> might work. Lemme check.
>
> This is the point. The namenode enters safe mode on startup to gather
> metadata about the files in HDFS, and then switches to normal mode. The
> time spent in safe mode depends on the amount of data in your HDFS.
>>
>> -Thanks for the pointer.
>> Prasen
>>
>> On Tue, Jan 19, 2010 at 10:47 AM, Amogh Vasekar <[email protected]> wrote:
>>> Hi,
>>> When the NN is in safe mode, you get a read-only view of the hadoop file 
>>> system (since the NN is reconstructing its image of the FS).
>>> Use "hadoop dfsadmin -safemode get" to check whether it is in safe mode,
>>> "hadoop dfsadmin -safemode leave" to leave safe mode forcefully, or
>>> "hadoop dfsadmin -safemode wait" to block till the NN leaves by itself.
>>>
>>> Amogh
>>>
>>>
>>> On 1/19/10 10:31 AM, "prasenjit mukherjee" <[email protected]> wrote:
>>>
>>> Hmmm. I am actually running it from a batch file. Is "hadoop fs -rmr"
>>> less reliable than pig's rm or hadoop's FileSystem API?
>>>
>>> Let me try your suggestion by writing a cleanup script in pig.
>>>
>>> -Thanks,
>>> Prasen
>>>
>>> On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi <[email protected]> 
>>> wrote:
>>>> Can you try with dfs / without the quotes? If you are using pig to run the 
>>>> jobs, you can use rmf within your script (again without quotes) to force 
>>>> the remove and avoid the error if the file/dir is not present. Or, if you 
>>>> are doing this inside a hadoop job, you can use FileSystem/FileStatus to 
>>>> delete the directories. HTH.
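If the cleanup has to stay in a plain shell batch file rather than moving into 
pig or the FileSystem API, a minimal sketch of the "don't fail when the dir is 
missing" idea, reusing the /op path from the original mail:

  # ignore a non-zero exit (e.g. /op not present yet) so the batch does not abort;
  # any error message is still printed to stderr
  hadoop fs -rmr /op || true

This only masks the exit code; it does not address the safe mode issue 
discussed elsewhere in the thread.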
>>>> Cheers,
>>>> /R
>>>>
>>>> On 1/19/10 10:15 AM, "prasenjit mukherjee" <[email protected]> wrote:
>>>>
>>>> "hadoop fs -rmr /op"
>>>>
>>>> That command always fails. I am trying to run sequential hadoop jobs.
>>>> After the first run, all subsequent runs fail while cleaning up (i.e.
>>>> removing the hadoop dir created by the previous run). What can I do to
>>>> avoid this?
>>>>
>>>> here is my hadoop version :
>>>> # hadoop version
>>>> Hadoop 0.20.0
>>>> Subversion 
>>>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20
>>>> -r 763504
>>>> Compiled by ndaley on Thu Apr  9 05:18:40 UTC 2009
>>>>
>>>> Any help is greatly appreciated.
>>>>
>>>> -Prasen
>>>>
>>>>
>>>
>>>
>>
>
>
>
> --
> [email protected]
>
