Hello again,

As I mentioned in a recent post, I often need to debug jobs running on
our local Galaxy mirror, and to do that I need to look at the script and
data files that the job is trying to use in order to figure out what is
causing a problem. For me, the directories containing those files are
'[galaxy_dist]/database/pbs/' and
'[galaxy_dist]/database/job_working_directory/'. Each job that runs gets
a corresponding .sh file in the pbs/ directory (e.g. 344.sh), which
contains the entire sequence of shell commands used to execute the job,
normally including a call to a wrapper script somewhere in the middle.
That script information is very useful, but the problem is that when a
job fails (often within the first 30 seconds of running) the script is
deleted, leaving no trace of it in the directory. The same happens with
the output and job data files in job_working_directory/.
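
Concretely, checking a live job's files looks something like this (just a
sketch: the job ID 344 is the example from above, GALAXY_DIST is a
placeholder for the installation directory, and I am assuming the default
flat per-job-ID layout under database/):

```shell
# Sketch of inspecting a running job's files before cleanup removes them.
# The paths assume the default database/ layout; adjust for your install.
inspect_job() {
    galaxy_dist="$1"   # e.g. /usr/local/galaxy_dist (placeholder)
    job_id="$2"        # e.g. 344
    # The generated shell script the runner executes for this job:
    cat "$galaxy_dist/database/pbs/$job_id.sh"
    # The job's working files:
    ls -l "$galaxy_dist/database/job_working_directory/$job_id"
}

# Example usage: inspect_job /usr/local/galaxy_dist 344
```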

So far I have had to make do with coordinating with the user on when to
(re)run their failed job and then, quickly within the 30-second window,
running a "cp script_I_care_about.sh copy_of_script.sh" command so that
when the script is deleted I still have a copy to examine. The same goes
for copying the job_working_directory/ files (with "cp -R" for the
directories). I understand that those directories would get very
cluttered if they were not automatically cleaned up, but I find those
files essential for debugging. Is there a way to (optionally) force
Galaxy to retain those files for debugging purposes? Maybe a new option
could be added to the universe_wsgi.ini file for people who want it?
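
In the meantime, the manual copy race above can be automated with a small
polling loop. This is only a sketch of my workaround, not anything Galaxy
provides: the watch directory, backup location, and one-second poll
interval are all my own assumptions.

```shell
#!/bin/sh
# Sketch: copy job scripts out of the pbs/ directory before Galaxy's
# automatic cleanup deletes them. All paths here are assumed placeholders.
snapshot_job_files() {
    watch_dir="$1"     # e.g. $GALAXY_DIST/database/pbs
    backup_dir="$2"    # somewhere outside Galaxy's database/ tree
    mkdir -p "$backup_dir"
    for f in "$watch_dir"/*.sh; do
        [ -e "$f" ] || continue                  # glob matched nothing
        dest="$backup_dir/$(basename "$f")"
        [ -e "$dest" ] || cp -p "$f" "$dest"     # copy each script once
    done
}

# Poll every second so a copy lands well inside the ~30-second window:
#   while true; do
#       snapshot_job_files "$GALAXY_DIST/database/pbs" "$HOME/job_backups"
#       sleep 1
#   done
```

The same function could be pointed at job_working_directory/ with a
recursive copy, at the cost of duplicating more data.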

Josh Nielsen