Another related question: my institute has configured an NFS volume
backed by an SSD disk pool on the file server.
I want to use it to improve Galaxy's job execution on big datasets.
However, the SSD volume is only 2.5 TB (they are very expensive...), so
migrating the entire database folder there is impossible.
Any recommendations for configuring Galaxy to make good use of the SSD?
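A sketch of what I have in mind, assuming the standard universe_wsgi.ini options (the /mnt/ssd mount point is hypothetical): keep the large dataset store on the big pool, but point the short-lived, I/O-heavy directories at the SSD-backed volume:

```ini
; universe_wsgi.ini -- /mnt/ssd is a hypothetical SSD mount point
; Keep the large dataset store on the bigger (slower) pool:
file_path = database/files
; Put temporary and job working data on the SSD-backed NFS volume:
new_file_path = /mnt/ssd/galaxy/tmp
job_working_directory = /mnt/ssd/galaxy/job_working_directory
```

Would that split be a sensible use of the 2.5 TB, or is there a better-suited option?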
On Tue, Jun 19, 2012 at 9:55 PM, Derrick Lin <klin...@gmail.com> wrote:
> I think my question has been answered:
> Hopefully we can see the enhancements in the near future.
> On Tue, Jun 19, 2012 at 5:01 PM, Derrick Lin <klin...@gmail.com> wrote:
>> Hi guys,
>> I have deployed Galaxy on a cluster (I installed it on an NFS share
>> that is accessible by all cluster compute nodes).
>> Everything is running fine. Now I am looking for a way to make every
>> job dispatched to a compute node use that node's local /tmp as its
>> working directory. I know the Galaxy config provides
>> job_working_directory for a similar purpose.
>> My question really is: all my compute nodes can access the NFS
>> share where Galaxy is installed, but the Galaxy host cannot access
>> each compute node's /tmp.
>> Is there a way for Galaxy to collect job results back into the
>> data directory?
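One possible workaround, sketched under assumptions (galaxy_job_script.sh stands in for whatever script the job actually runs; this is not a built-in Galaxy feature): wrap each job so it stages into node-local scratch, runs there, and copies its results back to the NFS-visible directory before exiting, so the Galaxy host only ever reads from the share:

```shell
# Hypothetical wrapper: run a job in node-local scratch, then copy
# results back to the NFS-visible job directory before exiting.
run_local() {
    nfs_dir="$1"                               # job dir on the NFS share
    scratch=$(mktemp -d "${TMPDIR:-/tmp}/galaxy_job.XXXXXX")
    cp -R "$nfs_dir"/. "$scratch"/             # stage inputs to local disk
    ( cd "$scratch" && sh ./galaxy_job_script.sh )  # run the job locally
    cp -R "$scratch"/. "$nfs_dir"/             # copy results back to the share
    rm -rf "$scratch"                          # clean up local scratch
}
```

The Galaxy host then collects outputs from the shared directory as usual, without ever needing to reach a compute node's /tmp.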
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this
and other Galaxy lists, please use the interface at: