Hey Pavel,

Depending on what you're trying to do once HDFS is ready, it's also worth
checking how many data nodes have registered with the name node.
Try this:

hadoop dfsadmin -report | grep "Datanodes available" | awk '{ print $3 }'

(That grabs the third field of the "Datanodes available" line in the report,
i.e. the number of live data nodes.)

- or -

MIN_NODES=5
MAX_RETRIES=15
counter=0
# Poll until at least MIN_NODES data nodes have registered with the name node.
while [ "$(hadoop dfsadmin -report | grep "Datanodes available" | awk '{ print $3 }')" -lt $MIN_NODES ]
do
  sleep 2
  counter=$((counter+1))
  if [ $counter -gt $MAX_RETRIES ]
  then
    echo "Not enough data nodes registered!"
    exit 1
  fi
done

If you try to write HDFS data immediately after the name node is out of safe
mode, you might get replication errors if data nodes haven't registered yet.
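
For reference, here's roughly how I'd string the two checks together in a
startup script. Treat it as a sketch: MIN_NODES, the retry values, and the
"localdata"/"input" paths are placeholders for whatever your job actually uses.

#!/bin/sh
# Sketch of a startup sequence: start HDFS, wait for the name node to leave
# safe mode, then wait for enough data nodes before loading any input.

MIN_NODES=5        # placeholder: how many data nodes you expect
MAX_RETRIES=15     # placeholder: how many times to poll before giving up

start-dfs.sh

# Block until the name node exits safe mode (per Todd's suggestion below).
hadoop dfsadmin -safemode wait

counter=0
while true
do
  count=$(hadoop dfsadmin -report | grep "Datanodes available" | awk '{ print $3 }')
  [ "${count:-0}" -ge "$MIN_NODES" ] && break
  counter=$((counter+1))
  if [ $counter -gt $MAX_RETRIES ]
  then
    echo "Not enough data nodes registered!"
    exit 1
  fi
  sleep 2
done

# Should be safe to load data now; "localdata" and "input" are example paths.
hadoop fs -put localdata input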

Alex

On Fri, Jun 19, 2009 at 6:21 AM, Todd Lipcon <t...@cloudera.com> wrote:

> Hi Pavel,
>
> You should use "hadoop dfsadmin -safemode wait" after starting your
> cluster.
> This will wait for the namenode to exit "safe mode" so you can begin making
> modifications.
>
> -Todd
>
> On Fri, Jun 19, 2009 at 9:03 AM, pavel kolodin <pavelkolo...@gmail.com> wrote:
>
> >
> > Hello.
> > How can I ensure that the cluster is up?
> > Right now I'm using "sleep 60" between "start-dfs.sh" and putting files to
> > input...
> > Thanks.
> >
>
