Hi
For the woefully inept, where in the heck is galaxy.ini stored?
I saw the documentation that said change 127.0.0.1 to 0.0.0.0 and the file
name, can't find it to do so.
I know it all works as I've installed a module and that all shows up and works
on localhost.
Thanks
Bryan
She got that error message when she clicked on settings and tried to view
saved histories.
It looks like this error when loading '/history/view' was fixed with
release_13.02:
https://github.com/galaxyproject/galaxy/commit/3deac8edfba8e8088de855789a28e7d3a328b9b5
If possible, you could update your Galaxy instance to a release that includes this fix.
It is under config/. You’ll want to copy it from config/galaxy.ini.sample.
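In case it helps, here is a minimal sketch of the copy-and-edit step. The sed pattern assumes the default `host = 127.0.0.1` line from the sample file; the scratch directory below just makes the example self-contained, whereas in a real checkout you would run the last three commands from the Galaxy root:

```shell
# Stand-in for a Galaxy checkout so the example is self-contained:
mkdir -p /tmp/galaxy-demo/config && cd /tmp/galaxy-demo
printf 'host = 127.0.0.1\nport = 8080\n' > config/galaxy.ini.sample

# The actual steps: copy the sample config, then switch the server
# from localhost-only to listening on all interfaces.
cp config/galaxy.ini.sample config/galaxy.ini
sed -i 's/^host = 127.0.0.1$/host = 0.0.0.0/' config/galaxy.ini
grep '^host' config/galaxy.ini
```

After restarting Galaxy, the web server should be reachable from other machines, not just localhost.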
Thanks for using Galaxy,
Dan
On Jul 14, 2015, at 6:35 AM, Bryan Hepworth bryan.hepwo...@newcastle.ac.uk
wrote:
Hi
For the woefully inept, where in the heck is galaxy.ini stored?
I saw the documentation that
Hi Bryan,
if you are running an old Galaxy instance, this file was called
universe_wsgi.ini and was located in your Galaxy root folder.
Cheers,
Bjoern
On 14.07.2015 12:35, Bryan Hepworth wrote:
Hi
For the woefully inept, where in the heck is galaxy.ini stored?
I saw the documentation that
We run the Galaxy server as a VM… for 0-30 simultaneous users on a
similar-size cluster.
16 virtual cores and 8 GB RAM.
I use 4 web handlers and 2 job handlers behind nginx.
Brad
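For reference, a multi-process setup like the one Brad describes was configured in this Galaxy era with one `[server:*]` section per process in galaxy.ini, with nginx proxying to the web ports. A hedged sketch (section names, ports, and the 2+2 split are illustrative, not Brad's actual config, which would have four web sections):

```
; Hypothetical galaxy.ini fragment: two web handlers and two job
; handlers, each a separate Paste process on its own port.
[server:web0]
use = egg:Paste#http
port = 8080
host = 127.0.0.1

[server:web1]
use = egg:Paste#http
port = 8081
host = 127.0.0.1

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1

[server:handler1]
use = egg:Paste#http
port = 8091
host = 127.0.0.1
```

nginx would then load-balance requests across the web ports, while jobs are routed to the handler processes via job_conf.xml.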
On Jul 14, 2015, at 1:13 PM, Benjamin Datko
bda...@carc.unm.edumailto:bda...@carc.unm.edu wrote:
Hello All,
We are
It looks like a recent commit (since 6/26) to 15.05 is breaking the workflow
editor. I have a pipeline I'm trying to edit. I updated my instance of
Galaxy, and now when I try to edit a workflow, the editor is stuck at
"Loading workflow editor".
Hi,
I am trying to run MetaPhlAn on the Galaxy public website, but even a 25 MB file
cannot be uploaded for analysis; it says the size is too large. And running
MetaPhlAn locally is just a nightmare because there is absolutely no
tutorial on how to install all the prerequisites.
I just fixed this here:
https://github.com/galaxyproject/cloudman/commit/0f6ab132958ad57a338c66387112407edbc7632d#diff-9eba488c9175de87945a13d7b7228a6dR621
This required switching to SSD-based volumes on AWS and, besides actually
being able to create volumes up to 16TB now, it should give us