Thanks for the reply. I looked through my logs further and found that
sometimes the hdfs address is the correct one.

But in the jobtracker log, there is an error:

file /data/mapredsys/zhang~~~/xxxx.info can only be replicated on 0 nodes
instead of 1
...........................
DFS is not ready...


And when I check for the file, the whole dir is not there. Also, do you know
how to check the namenode/datanode logs? I can't find them anywhere. Thanks a
lot!

Boyu

On Thu, Apr 8, 2010 at 4:58 PM, Kevin Van Workum <[email protected]> wrote:

> On Thu, Apr 8, 2010 at 2:23 PM, Boyu Zhang <[email protected]> wrote:
> > Hi Kevin,
> >
> > I am having the same error, but my critical error is:
> >
> > [2010-04-08 13:47:25,304] CRITICAL/50 hadoop:303 - Cluster could not be
> > allocated because of the following errors.
> > Hodring at n0 failed with following errors:
> > JobTracker failed to initialise
> >
> > Have you solved this? Thanks!
>
> Yes, I was about to post my solution. In my case the issue was that
> the default log-dir is the "log" directory under the HOD
> installation. Since I didn't have permission to write to this
> directory, the hdfs couldn't initialize. Setting "log-dir = logs" for
> [hod], [ringmaster], [hodring], [gridservice-mapred], and
> [gridservice-hdfs] in hodrc fixed the problem by writing the logs to
> the "logs" directory under the CWD.
>
> Also, I have managed to get HOD to use the hod.cluster setting from
> hodrc to set the node properties for the qsub command. I'm going to
> clean up my modifications and post it in the next day or two.
>
> Kevin
>
>
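For anyone hitting the same problem, the fix Kevin describes amounts to
adding the same log-dir line under each of the five sections in hodrc
(the relative path "logs" is just the value from his mail; any directory
your user can write to should work):

    [hod]
    log-dir = logs

    [ringmaster]
    log-dir = logs

    [hodring]
    log-dir = logs

    [gridservice-mapred]
    log-dir = logs

    [gridservice-hdfs]
    log-dir = logs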
