We emailed previously about possible memory leaks in our installation of Galaxy
here on the HPC at Bristol. Galaxy runs just fine when confined to our login
node, but when we integrate it with the cluster using the PBS job runner the
whole thing falls over - almost certainly due to a memory leak. In essence,
every attempt to submit a TopHat job (paired-end reads in two 5 GB files,
against the full human genome) brings Galaxy down - yet never when Galaxy is
restricted to the login node.
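
For reference, our cluster integration follows the stock Galaxy setup, with
universe_wsgi.ini pointing the job runner at PBS. A minimal sketch of the
relevant settings (values below are illustrative placeholders, not our exact
production config):

    # universe_wsgi.ini (excerpt from the [app:main] section)
    # Load the PBS runner; pbs_python must be built against the
    # local libtorque for this to work.
    start_job_runners = pbs
    # Dispatch tool jobs to the default TORQUE server on the cluster.
    default_cluster_job_runner = pbs:///

    [galaxy:tool_runners]
    # Individual tools can also be routed explicitly.
    tophat = pbs:///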
We saw that Nate responded to Todd Oakley about a week ago saying that there
is a memory leak in libtorque or pbs_python when using the PBS job runner.
Have there been any developments on this?
Dr David A. Matthews
Senior Lecturer in Virology
Department of Cellular and Molecular Medicine,
School of Medical Sciences
University of Bristol
Tel. +44 117 3312058
Fax. +44 117 3312091