Hello Geert,
I don't believe any such functionality is available out of the box,
but I am confident clever use of dynamic job runners
(http://lists.bx.psu.edu/pipermail/galaxy-dev/2012-June/010080.html)
could solve this problem.
One approach would be to maybe move all of your job runners out of
g
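The dynamic job runner approach mentioned above can be sketched as a rule function. This is a minimal illustration only, assuming the conventions described in the linked thread: the rule is a plain Python function placed where Galaxy's dynamic runner can find it, and the runner URLs (`pbs:///big_queue/`, `local:///`) and the `@biglab.example.org` domain are hypothetical placeholders.

```python
# Minimal sketch of a dynamic job runner rule (assumption: Galaxy calls the
# rule with information about the submitting user and expects a runner URL
# back; the queue names and domain below are illustrative, not real config).
def route_by_user(user_email):
    """Send jobs from a heavy-usage group to a dedicated queue."""
    if user_email and user_email.endswith("@biglab.example.org"):
        return "pbs:///big_queue/"  # hypothetical dedicated cluster queue
    return "local:///"              # fall back to the local runner
```

The point is only that the routing decision is ordinary Python, so any per-user or per-tool policy can be expressed this way.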
I am not sure if this is the cause, but if you have changed a workflow step's
input, often the workflow becomes uneditable and Galaxy can even hang...
So if you changed a step in your workflow that was an input (parameter changes
are fine), set it back to what it was and you should be able to e
Hi,
One of the biggest hurdles for the implementation in our institute is the
inability of Galaxy API to set parameters at run time.
It seems you can only set inputs, not parameters...
Is there any ETA on when this will be available? Is this even a priority?
Thanks!
Regards,
Thon de Boer,
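For context, the limitation described above can be seen in the shape of the workflow-run request the API accepts. The sketch below is an assumption-laden illustration of the era's payload: only `ds_map` (dataset inputs keyed by step) is available, with no field for overriding tool parameters. All ids are placeholders.

```python
import json

# Sketch of a workflow-run payload for the Galaxy API of this period
# (assumptions: POST /api/workflows endpoint; the encoded ids below are
# hypothetical). Note there is a slot for dataset inputs (ds_map) but
# nothing for tool parameters, which is the gap being discussed.
payload = {
    "workflow_id": "ebfb8f50c6abde6d",      # hypothetical encoded workflow id
    "history": "hist_id=f2db41e1fa331b3e",  # run in an existing history
    "ds_map": {
        "1": {"src": "hda", "id": "33b43b4e7093c91f"},  # step 1 input dataset
    },
}
body = json.dumps(payload)
```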
Hi Lance,
On Sep 21, 2012, at 6:04 PM, Lance Parsons wrote:
> OK, I was able to get a new version installed. It seems there are two issues:
>
> 1) New revisions with the same version "invalidate" previous revisions.
> This means that Galaxy servers with the old, and now invalid, revisions a
On Sep 19, 2012, at 9:50 AM, Jennifer Jackson wrote:
> repost to galaxy-dev
>
> On 9/7/12 6:39 PM, Lukasz Lacinski wrote:
>> Dear All,
>>
>> I use an init script that comes with Galaxy in the contrib/ subdirectory
>> to start Galaxy. The log file
>>
>> --log-file /home/galaxy/galaxy.log
>>
>>
For Test/Main, I have the user's ~/.bash_profile set $PYTHON_EGG_CACHE on a
per-node basis. This could also be done per-node and per-pty to ensure
uniqueness per job.
--nate
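The per-node uniqueness Nate describes can also be achieved from Python rather than ~/.bash_profile. This is a sketch of the idea only, assuming a shared home directory, so suffixing the cache path with the hostname keeps each node's eggs separate.

```python
import os
import socket

# Sketch: derive a node-specific PYTHON_EGG_CACHE (assumption: home is
# shared across nodes, so the hostname suffix avoids collisions).
def egg_cache_dir(base="~/.python-eggs"):
    """Return a per-node egg cache path, e.g. ~/.python-eggs-node01."""
    return os.path.expanduser("%s-%s" % (base, socket.gethostname()))

# setdefault respects a value already exported in the environment
os.environ.setdefault("PYTHON_EGG_CACHE", egg_cache_dir())
```

Appending a per-job component (e.g. the job id) to the same path would give the per-job uniqueness mentioned above.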
On Sep 18, 2012, at 11:24 AM, James Taylor wrote:
> Interesting. If I'm reading this correctly the problem is happening
Hello,
After updating to the Sept. 07 distribution I am having problems editing
an existing workflow.
Server error
URL:
http://galaxy_url/workflow/load_workflow?id=ba751ee0539fff04&_=1348501448807
Module paste.exceptions.errormiddleware:143 in __call__
>> app_iter = self.application(environ, sta
Hello,
I am trying to run the cleanup scripts on my local installation but get
stuck when trying to run the following:
./scripts/cleanup_datasets/cleanup_datasets.py universe_wsgi.ini -d 10 -5 -r
Deleting library dataset id 7225
Traceback (most recent call last):
File "./scripts/cleanup_data
Hanfei,
I'd be happy to take a look at the report and share it with the rest of the
team if you'd like to send it directly to me.
Regarding SSL, this is definitely something that you can set up for your own
instance, see the documentation for configuring proxies on the wiki
http://wiki.g2.bx.p
Hi,
The admin pages state that it is possible to specify multiple clusters
in the universe file. Currently, we are investigating if we can couple
the university HPC platform to galaxy, to handle usage peaks. It would
be ideal if the job manager would check the load of the dedicated
cluster (e
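The load-aware routing wished for here is another natural fit for a dynamic job runner rule. The sketch below is hypothetical: `queue_depth` stands in for a real load check (e.g. parsing `qstat` output), and both runner URLs are invented for illustration.

```python
# Sketch of a load-aware dynamic rule (assumptions: queue_depth is obtained
# elsewhere, e.g. by parsing qstat; the threshold and runner URLs are
# illustrative, not real configuration).
def pick_cluster(queue_depth, threshold=50):
    """Prefer the dedicated cluster, spill to the university HPC when busy."""
    if queue_depth > threshold:
        return "drmaa://hpc.university.example/"  # overflow to shared HPC
    return "pbs:///dedicated/"                    # normal case: own cluster
```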