Hi all,

I am just getting started with Hadoop 0.20 and trying to run a job in
pseudo-distributed mode.

I configured Hadoop according to the tutorial, but it does not seem to
work as expected.

My map/reduce tasks run sequentially, and the output is written to the
local filesystem instead of HDFS.
The JobTracker does not see the running job at all.
I have checked the logs but don't see any errors either. I have also
copied some files to HDFS manually to confirm that it works.

The only difference between the tutorial and my configuration is that I
had to change the ports for the JobTracker and NameNode, as 9000 and
9001 are already in use by other applications on my workstation.
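For reference, my changes amount to something like the following in
conf/core-site.xml and conf/mapred-site.xml (9100 and 9101 here are just
example replacement ports, not necessarily the ones I used):

```xml
<!-- conf/core-site.xml: NameNode address (default port 9000 replaced) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml: JobTracker address (default port 9001 replaced) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9101</value>
  </property>
</configuration>
```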

Any hints?

Thanks

Regards,

Vasyl
