jps gets its information from the files stored under /tmp/hsperfdata_*,
so when a cron job clears your /tmp directory, it also erases these
files. You can submit jobs as long as your jobtracker and namenode are
responding to requests over TCP, though.
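A quick way to see this for yourself (a sketch assuming the default per-user
setup; the find-based cleanup line is hypothetical, not taken from any stock
cron job):

  # jps reads one file per JVM, named after its pid:
  ls /tmp/hsperfdata_$(whoami)
  # a /tmp cleanup job can be written to spare those files, e.g.:
  find /tmp -mindepth 1 -maxdepth 1 -mtime +7 ! -name 'hsperfdata_*' -exec rm -rf {} +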
- Marcos
Raymond Jennings III wrote:
That would explain why the processes cannot be stopped, but the mystery of why
jps loses track of these active processes still remains. Even when jps does
not report any hadoop process, I can still submit and run jobs just fine. I
will have to check, the next time it happens, whether the Hadoop PIDs are the
same as what is in the file. If they differ, would that somehow mean the
Hadoop process was restarted?
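(A quick way to make that comparison, assuming the default pid file naming of
hadoop-<user>-<daemon>.pid under /tmp; namenode here is just an example daemon:)

  cat /tmp/hadoop-$(whoami)-namenode.pid   # pid recorded when the daemon started
  ps -ef | grep [N]ameNode                 # pid of the JVM actually running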
--- On Mon, 3/29/10, Bill Habermaas <b...@habermaas.us> wrote:
From: Bill Habermaas <b...@habermaas.us>
Subject: RE: why does 'jps' lose track of hadoop processes ?
To: common-user@hadoop.apache.org
Date: Monday, March 29, 2010, 11:44 AM
Sounds like your pid files are getting cleaned out of whatever directory they
are being written to (maybe garbage collection on a temp directory?).
Look at this (taken from hadoop-env.sh):

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

The hadoop shell scripts look in the directory that is defined there.
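A minimal sketch of that workaround, assuming the daemons run as a 'hadoop'
user and that /var/hadoop/pids is an acceptable spot on your nodes (both are
assumptions; adjust for your setup):

  # one-time setup on each node:
  mkdir -p /var/hadoop/pids
  chown hadoop:hadoop /var/hadoop/pids
  # then set in conf/hadoop-env.sh and restart the daemons:
  export HADOOP_PID_DIR=/var/hadoop/pids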
Bill
-----Original Message-----
From: Raymond Jennings III [mailto:raymondj...@yahoo.com]
Sent: Monday, March 29, 2010 11:37 AM
To: common-user@hadoop.apache.org
Subject: why does 'jps' lose track of hadoop processes ?
After running hadoop for some period of time, the command 'jps' fails to
report any hadoop process on any node in the cluster. The processes are still
running, as can be seen with 'ps -ef | grep java'. In addition, scripts like
stop-dfs.sh and stop-mapred.sh no longer find the processes to stop.
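Roughly, the stop scripts do something like this (a paraphrased sketch of the
pid-file logic in hadoop-daemon.sh, not the exact code):

  pidfile=$HADOOP_PID_DIR/hadoop-$USER-namenode.pid
  if [ -f "$pidfile" ]; then
    kill $(cat "$pidfile")
  else
    echo "no namenode to stop"   # what you see once the pid file is gone
  fi

so once the pid file disappears from /tmp, there is nothing for them to kill.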
--
------------------------------------------------------------------------
Marcos Medrado Rubinelli
Tecnologia - BuscaPé
Tel. +55 11 3848-8700 Ramal 8788
marc...@buscape-inc.com