Sounds like your pid files are getting cleaned out of whatever directory
they are being written to (maybe garbage collection on a temp directory?).

Look at (taken from hadoop-env.sh):
# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

The hadoop shell scripts look for the pid files in whatever directory is
defined there.
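As a sketch (the path /var/hadoop/pids is just an example, and the exact
conf file location varies by install), you would uncomment and set that
line so pid files live somewhere that survives tmp cleanup:

```shell
# In conf/hadoop-env.sh: point pid files at a directory that is not
# subject to periodic /tmp cleanup and is writable by the daemon user.
export HADOOP_PID_DIR=/var/hadoop/pids
```

The start/stop scripts then write and read files like
$HADOOP_PID_DIR/hadoop-<user>-namenode.pid, so stop-dfs.sh and
stop-mapred.sh can find the processes again after the change.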

Bill

-----Original Message-----
From: Raymond Jennings III [mailto:raymondj...@yahoo.com] 
Sent: Monday, March 29, 2010 11:37 AM
To: common-user@hadoop.apache.org
Subject: why does 'jps' lose track of hadoop processes ?

After running hadoop for some period of time, the command 'jps' fails to
report any hadoop process on any node in the cluster.  The processes are
still running, as can be seen with 'ps -ef | grep java'.

In addition, scripts like stop-dfs.sh and stop-mapred.sh no longer find the
processes to stop.


      

