[galaxy-dev] recovery: galaxy restart
Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) to suit my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks up the running jobs from the database. My problem is understanding how Galaxy regains the same old state (especially the GUI) that it had before the restart. Can anyone explain the flow of the recover function and how the old state is regained? Regards Harendra ___ Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
[galaxy-dev] Dynamic default values
Dear all, I was thinking about how to improve some of my tools and wondered if there is any way to have dynamic default values for parameters (depending on other parameters). Obviously this won't make sense in all cases, but as a specific example, consider my Venn Diagram tool: https://bitbucket.org/peterjc/galaxy-central/src/9f66c5d1fca8/tools/plotting/venn_list.xml I have separate parameters for each dataset (file input) and its caption (text input), like so:
<param name="set" type="data" format="tabular,fasta,fastq,sff" label="Members of set" help="Tabular file (uses column one), FASTA, FASTQ or SFF file."/>
<param name="lab" size="30" type="text" value="Group" label="Caption for set"/>
Is something like this possible (or generally useful)?
<param name="set" type="data" format="tabular,fasta,fastq,sff" label="Members of set" help="Tabular file (uses column one), FASTA, FASTQ or SFF file."/>
<param name="lab" size="30" type="text" value="$set.name" label="Caption for set"/>
i.e. The default value of the text parameter lab would be the human-friendly name of the associated data file parameter set. I'd expect changing the file to also change the text box. I have thought of a workaround, but not yet tried it: use the empty string as the default, and use an if statement in the Cheetah template for constructing the command line string. Thanks, Peter
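Peter's workaround could look roughly like the sketch below (illustrative only; the script name, argument order, and exact command-line handling are assumptions, not the real venn_list.xml):

```xml
<!-- Sketch: empty default for the caption, fall back to the dataset
     name in the Cheetah command template when the caption is blank. -->
<command>venn_list.py "$set"
#if str($lab).strip()
  "$lab"
#else
  "$set.name"
#end if
</command>
<inputs>
  <param name="set" type="data" format="tabular,fasta,fastq,sff"
         label="Members of set"/>
  <param name="lab" size="30" type="text" value=""
         label="Caption for set (leave blank to use the dataset name)"/>
</inputs>
```

The downside, as Peter notes, is that the text box in the GUI stays empty rather than updating when the file selection changes.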
Re: [galaxy-dev] recovery: galaxy restart
Harendra chawla wrote: Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) to suit my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks up the running jobs from the database. My problem is understanding how Galaxy regains the same old state (especially the GUI) that it had before the restart. Can anyone explain the flow of the recover function and how the old state is regained? Hi Harendra, I'm not sure I understand what you mean by old state and the GUI - all that's really necessary here is to determine what Galaxy considers to be the state of the job (new, queued, running), recreate the in-memory job components (the JobWrapper), and place the job back in Galaxy's DRM-monitoring queue, which will then proceed with finishing the job if it's finished in the DRM, or waiting for it to finish if it's still queued or running in the DRM. --nate Regards Harendra
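Nate's description of the recovery flow can be sketched as a toy model (the class and function names below are illustrative stand-ins, not Galaxy's actual API):

```python
# Toy model of the recovery flow: rebuild in-memory wrappers for unfinished
# jobs found in the database and requeue them for DRM monitoring.

class JobWrapper:
    """Stand-in for Galaxy's in-memory job state."""
    def __init__(self, job_id, state):
        self.job_id = job_id
        self.state = state

def recover(jobs_from_db, monitor_queue):
    """Requeue every job the database says is still new, queued, or running."""
    recovered = []
    for job_id, state in jobs_from_db:
        if state in ("new", "queued", "running"):
            wrapper = JobWrapper(job_id, state)
            monitor_queue.append(wrapper)  # monitor thread polls the DRM for these
            recovered.append(wrapper)
    return recovered

queue = []
recover([(1, "running"), (2, "ok"), (3, "queued")], queue)
print([w.job_id for w in queue])  # prints [1, 3]
```

Finished jobs (state "ok") are skipped; everything else goes back into the monitoring loop, which then either finishes the job or keeps waiting on the DRM.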
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
Hi Peter, Greg will probably reply, but I'll throw in my $0.02 as well. Peter Cock wrote: Hi Greg et al, I've just been looking over your slides from last week about the new 'Galaxy Tool Shed', which are posted online here: http://wiki.g2.bx.psu.edu/GCC2011 http://wiki.g2.bx.psu.edu/GCC2011?action=AttachFile&do=get&target=GalaxyToolShed.pdf They talk about how you will be tracking individual tools in hg repositories. I can see two ways this might work: (1) Each of these tool specific repositories (or branches if you just make one repository for each tool owner) would be a full fork of the Galaxy code base. This allows in principle tools to include changes to core functionality (but that seems dangerous due to potential merge clashes), and any existing tool contributor's pre-existing hg forks on bitbucket might be reused. The tool shed isn't really intended for framework changes - I would suggest keeping these as bitbucket forks, although it would certainly be good if we had a way to locate the list of such forks centrally. (2) Each of these tool specific repositories would ONLY track the tool specific files you'd add to Galaxy to install the tool. So, typically there would be an XML file, perhaps a wrapper script, maybe a sample loc file, and a plain text readme file. I'm guessing you've gone for something along the lines of idea (2), but I Yep. would love to hear more about how this will all work. e.g. Where would the tool shed repositories be hosted, and would tool authors use hg to work with them, or something like the current web based tool upload? They're hosted here, and you can check them out and work with them locally as you do the Galaxy source itself, or use the new web-based upload to upload individual files or tarballs. Have a look at the test instance of the next-gen toolshed here if you'd like to see how it works: http://testtoolshed.g2.bx.psu.edu/ Please feel free to use this as a sandbox and report any issues you find. 
--nate Regards, Peter
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
On Wed, Jun 1, 2011 at 3:22 PM, Nate Coraor n...@bx.psu.edu wrote: Hi Peter, Greg will probably reply, but I'll throw in my $0.02 as well. Great - but with your answers you've triggered more questions ;) Peter Cock wrote: Hi Greg et al, I've just been looking over your slides from last week about the new 'Galaxy Tool Shed', which are posted online here: http://wiki.g2.bx.psu.edu/GCC2011 http://wiki.g2.bx.psu.edu/GCC2011?action=AttachFile&do=get&target=GalaxyToolShed.pdf They talk about how you will be tracking individual tools in hg repositories. I can see two ways this might work: (1) Each of these tool specific repositories (or branches if you just make one repository for each tool owner) would be a full fork of the Galaxy code base. This allows in principle tools to include changes to core functionality (but that seems dangerous due to potential merge clashes), and any existing tool contributor's pre-existing hg forks on bitbucket might be reused. The tool shed isn't really intended for framework changes - I would suggest keeping these as bitbucket forks, although it would certainly be good if we had a way to locate the list of such forks centrally. Well, as long as the repository is created by forking on bitbucket, then the link exists in the bitbucket web interface: https://bitbucket.org/galaxy/galaxy-central/descendants (2) Each of these tool specific repositories would ONLY track the tool specific files you'd add to Galaxy to install the tool. So, typically there would be an XML file, perhaps a wrapper script, maybe a sample loc file, and a plain text readme file. I'm guessing you've gone for something along the lines of idea (2), but I Yep. It did seem the most likely route. would love to hear more about how this will all work. e.g. Where would the tool shed repositories be hosted, and would tool authors use hg to work with them, or something like the current web based tool upload? 
They're hosted here, and you can check them out and work with them locally as you do the Galaxy source itself, or use the new web-based upload to upload individual files or tarballs. Have a look at the test instance of the next-gen toolshed here if you'd like to see how it works: http://testtoolshed.g2.bx.psu.edu/ Please feel free to use this as a sandbox and report any issues you find. I see the existing usernames and passwords from the old Tool Shed were transferred - that makes life easier. And it lists the hg information, e.g. hg clone http://pete...@testtoolshed.g2.bx.psu.edu/repos/peterjc/venn_list hg clone http://pete...@testtoolshed.g2.bx.psu.edu/repos/peterjc/tmhmm_and_signalp What happens with branches? Would the Tool Shed just show the default branch? That seems best for a simple UI. I have a query regarding the way the tools are shown in tables and the version column, which shows a changeset and revision number. According to Greg's slides (slide #10, titled Simpler tool versioning, which seems ironic to me), the old numerical version is still there in the XML - and I'd prefer to see that. How about having both shown (two columns, perhaps called Public version and hg version or hg revision)? With regards to the planned installation functionality, what happens when a tool repository (aka Tool Suite in the old model) contains several XML wrappers - would you be able to choose which are wanted? The use case I have here is when several tools share some common dependency (which should be tracked in a single repository), and are therefore useful to bundle together as a suite, but where not all the tools will be of global interest (e.g. my TMHMM, SignalP, etc. suite). Peter
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
Hello Peter - I finally got a chance to jump in - see my inline comments... On Jun 1, 2011, at 11:00 AM, Peter Cock wrote: On Wed, Jun 1, 2011 at 3:22 PM, Nate Coraor n...@bx.psu.edu wrote: Hi Peter, Greg will probably reply, but I'll throw in my $0.02 as well. Great - but with your answers you've triggered more questions ;) Peter Cock wrote: Hi Greg et al, I've just been looking over your slides from last week about the new 'Galaxy Tool Shed', which are posted online here: http://wiki.g2.bx.psu.edu/GCC2011 http://wiki.g2.bx.psu.edu/GCC2011?action=AttachFile&do=get&target=GalaxyToolShed.pdf They talk about how you will be tracking individual tools in hg repositories. I can see two ways this might work: (1) Each of these tool specific repositories (or branches if you just make one repository for each tool owner) would be a full fork of the Galaxy code base. This allows in principle tools to include changes to core functionality (but that seems dangerous due to potential merge clashes), and any existing tool contributor's pre-existing hg forks on bitbucket might be reused. The tool shed isn't really intended for framework changes - I would suggest keeping these as bitbucket forks, although it would certainly be good if we had a way to locate the list of such forks centrally. Well, as long as the repository is created by forking on bitbucket, then the link exists in the bitbucket web interface: https://bitbucket.org/galaxy/galaxy-central/descendants What's important here is that each tool or set of tools is its own separate entity - see the future big picture highlights below for reasons. (2) Each of these tool specific repositories would ONLY track the tool specific files you'd add to Galaxy to install the tool. So, typically there would be an XML file, perhaps a wrapper script, maybe a sample loc file, and a plain text readme file. I'm guessing you've gone for something along the lines of idea (2), but I Yep. It did seem the most likely route. 
would love to hear more about how this will all work. e.g. Where would the tool shed repositories be hosted, and would tool authors use hg to work with them, or something like the current web based tool upload? They're hosted here, and you can check them out and work with them locally as you do the Galaxy source itself, or use the new web-based upload to upload individual files or tarballs. Have a look at the test instance of the next-gen toolshed here if you'd like to see how it works: http://testtoolshed.g2.bx.psu.edu/ Please feel free to use this as a sandbox and report any issues you find. I see the existing usernames and passwords from the old Tool Shed were transferred - that makes life easier. And it lists the hg information, e.g. hg clone http://pete...@testtoolshed.g2.bx.psu.edu/repos/peterjc/venn_list hg clone http://pete...@testtoolshed.g2.bx.psu.edu/repos/peterjc/tmhmm_and_signalp What happens with branches? Would the Tool Shed just show the default branch? That seems best for a simple UI. Some of the branching details are yet to be worked out, but forks are easy because repository urls include the unique username of the Galaxy user. I have a query regarding the way the tools are shown in tables and the version column, which shows a changeset and revision number. According to Greg's slides (slide #10, titled Simpler tool versioning which seems ironic to me), the old numerical version is still there in the XML - and I'd prefer to see that. How about having both shown (two columns, perhaps call them Public version and hg version or hg revision). We can certainly do this, but what would you like to see for tool suites and other tool types? The old Galaxy tool shed strictly required a suite_config.xml file that included the overall version of the suite. To make tool development easier, we're no longer requiring the inclusion of a suite_config.xml file ( we don't even differentiate types of tools since everything is a repository ). 
The definition of a tool in the next gen tool shed is fairly loose. A tool could be data, it could be an exported workflow, it could be a suite of tools, a single tool, or just a set of files. So we'll need to define an easy way to provide a version of the tool if it will be different from the version of the repository tip. With regards to the planned installation functionality, what happens when a tool repository (aka Tool Suite in the old model) contains several XML wrappers - would you be able to choose which are wanted? Yes - see below... The use case I have here is when several tools share some common dependency (which should be tracked in a single repository), and are therefore useful to bundle together as a suite, but where not all the tools will be of global interest (e.g. my TMHMM, SignalP, etc. suite). Here are the future big-picture highlights. Many of the
[galaxy-dev] listing universe_wsgi.ini configuration
Hi, I have started the galaxy daemon using the 'reload' option. I am wondering if there is a command to print the currently used configuration values from the universe_wsgi.ini file. It may help verify that configuration changes were reloaded. -- Thanks, Shantanu.
Re: [galaxy-dev] recovery: galaxy restart
Hi Nate, I got your point, but which part of the code does all these things? I mean, how exactly is this done? Is it using any other function apart from recover? Regards Harendra On Wed, Jun 1, 2011 at 8:56 AM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) to suit my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks up the running jobs from the database. My problem is understanding how Galaxy regains the same old state (especially the GUI) that it had before the restart. Can anyone explain the flow of the recover function and how the old state is regained? Hi Harendra, I'm not sure I understand what you mean by old state and the GUI - all that's really necessary here is to determine what Galaxy considers to be the state of the job (new, queued, running), recreate the in-memory job components (the JobWrapper), and place the job back in Galaxy's DRM-monitoring queue, which will then proceed with finishing the job if it's finished in the DRM, or waiting for it to finish if it's still queued or running in the DRM. --nate Regards Harendra
Re: [galaxy-dev] recovery: galaxy restart
Harendra chawla wrote: Hi Nate, I got your point, but which part of the code does all these things? I mean, how exactly is this done? Is it using any other function apart from recover? Yes, see __check_jobs_at_startup() in lib/galaxy/jobs/__init__.py --nate Regards Harendra On Wed, Jun 1, 2011 at 8:56 AM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) to suit my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks up the running jobs from the database. My problem is understanding how Galaxy regains the same old state (especially the GUI) that it had before the restart. Can anyone explain the flow of the recover function and how the old state is regained? Hi Harendra, I'm not sure I understand what you mean by old state and the GUI - all that's really necessary here is to determine what Galaxy considers to be the state of the job (new, queued, running), recreate the in-memory job components (the JobWrapper), and place the job back in Galaxy's DRM-monitoring queue, which will then proceed with finishing the job if it's finished in the DRM, or waiting for it to finish if it's still queued or running in the DRM. --nate Regards Harendra
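The startup check Nate points to can be sketched as a simple dispatch loop (a toy model; the real __check_jobs_at_startup also inspects job state and rebuilds JobWrappers, and the names below are illustrative):

```python
# Toy dispatch: at startup, hand each unfinished job back to the runner
# plugin (e.g. drmaa, local) that was managing it, via that runner's recover().

class StubRunner:
    """Stand-in for a job runner plugin such as the DRMAA runner."""
    def __init__(self):
        self.recovered = []

    def recover(self, job):
        self.recovered.append(job)

def check_jobs_at_startup(unfinished_jobs, runners):
    """Route each unfinished job to its runner's recover() method."""
    for job in unfinished_jobs:
        runner = runners.get(job["runner"])
        if runner is not None:
            runner.recover(job)

drmaa_runner = StubRunner()
check_jobs_at_startup(
    [{"id": 1, "runner": "drmaa"}, {"id": 2, "runner": "local"}],
    {"drmaa": drmaa_runner},
)
```

After recover() requeues the job, the runner's normal monitoring loop (monitor() and check_watched_items() in the DRMAA runner) takes over, exactly as for a freshly submitted job.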
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
On Wed, Jun 1, 2011 at 4:22 PM, Greg Von Kuster g...@bx.psu.edu wrote: Hello Peter - I finally got a chance to jump in - see my inline comments... Hi :) What happens with branches? Would the Tool Shed just show the default branch? That seems best for a simple UI. Some of the branching details are yet to be worked out, but forks are easy because repository urls include the unique username of the Galaxy user. Well, yes and no - as long as there are competing versions of a Galaxy tool (e.g. from an original author and a fork by a second author), and they use the same ID in their XML, you have a clash. This will have to be considered in the (automated) install interface. i.e. In general, when installing or updating any tool, there may be existing versions of some components already present. In fact two completely unrelated tools could even have the same XML ID by accident. I have a query regarding the way the tools are shown in tables and the version column, which shows a changeset and revision number. According to Greg's slides (slide #10, titled Simpler tool versioning which seems ironic to me), the old numerical version is still there in the XML - and I'd prefer to see that. How about having both shown (two columns, perhaps call them Public version and hg version or hg revision). We can certainly do this, but what would you like to see for tool suites and other tool types? The old Galaxy tool shed strictly required a suite_config.xml file that included the overall version of the suite. To make tool development easier, we're no longer requiring the inclusion of a suite_config.xml file ( we don't even differentiate types of tools since everything is a repository ). The definition of a tool in the next gen tool shed, is fairly loose. A tool could be data, it could be an exported workflow, it could be a suite of tools, a single tool, or just a set of files. 
So we'll need to define an easy way to provide a version of the tool if it will be different than the version of the repository tip. I see what you mean for the suite case. Maybe on the view details page each constituent tool could be shown with its classical version number from the XML file? Here's the future big picture highlights. Many of the details are yet to be defined and fleshed out... We're hoping that in the near future there will be many local tool sheds ( just like Galaxy instances ). I'm thinking that there will be a central tool shed broker of sorts that is hosted by the Galaxy team. This broker will provide 2 basic functions. It will enable local tool sheds ( including the current tool shed hosted by the Galaxy team ) to advertise their tools, and it will allow local Galaxy instances to use those advertisements to find tools that the local Galaxy instance's users are interested in. This specific point has not yet been discussed to any depth, so consider it fluid for now. I'm not immediately sold on this plan. To me one of the big plus points of having a single Official Tool Shed looked after by the Galaxy team is the convenience factor (a one stop shop), which requires critical mass, plus whatever QA happens as part of the current approval process. I would regard it as a step backwards if in order to hunt for a wrapper for a given tool, I had to resort to Google in order to find all the individual Galaxy Tool Sheds. When a Galaxy instance's admin locates tools within a specific tool shed that they want to install, they will be able to install them via a Galaxy tool installation control panel. Think of a UI that provides a check-boxed list of tools that have been found in some tool shed or sheds. The Galaxy admin will check those tools he wants to install, and the tools, along with all dependencies will automatically be installed in the local Galaxy instance. 
Dependencies could include 3rd party binaries, maybe some form of data, and other forms of dependencies. This is another good reason to keep tools separated in their own repositories. If you mean by dependencies the small task of installing the tool XML and associated scripts and data files currently bundled in the tar balls on the current Tool Shed, that seems fine. Anything beyond that seems difficult and likely to impose a significant extra load on tool wrapper authors. The installation will be virtually automatic, requiring little or no manual intervention via a package manage of sorts. This will be done using a combination of fabric scripts, and other components. All of the underlying mercurial stuff will be handled beneath the UI layer. This larger aim of installing the underlying dependencies is impossible in general - but that seems to be what you want to aim for. Consider obvious use case of closed source (non-redistributable) 3rd party binaries. I can think of several examples from the current Tool Shed wrappers, including the Roche Newbler off instrument applications, TMHMM and SignalP.
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
Peter Cock wrote: Well, yes and no - as long as there are competing versions of a Galaxy tool (e.g. from an original author and a fork by a second author), and they use the same ID in their XML, you have a clash. This will have to be considered in the (automated) install interface. i.e. In general, when installing or updating any tool, there may be existing versions of some components already present. In fact two completely unrelated tools could even have the same XML ID by accident. I agree there could be a problem with tool ID uniqueness. We've talked about suggesting that people namespace their tool IDs to prevent this, but nothing formal has materialized at this point. I'm not immediately sold on this plan. To me one of the big plus points of having a single Official Tool Shed looked after by the Galaxy team is the convenience factor (a one stop shop), which requires critical mass, plus whatever QA happens as part of the current approval process. I would regard it as a step backwards if in order to hunt for a wrapper for a given tool, I had to resort to Google in order to find all the individual Galaxy Tool Sheds. It'll be possible for people to run their own Tool Sheds if they'd like, for whatever purpose - and this may be necessary for sharing extremely large data which we can't possibly host at the main Shed, but there should be an aggregator somewhere which lists all of the available public Sheds and makes it easy to add them as new sources to your Galaxy install. Like a slightly more organized Debian APT system. If you mean by dependencies the small task of installing the tool XML and associated scripts and data files currently bundled in the tar balls on the current Tool Shed, that seems fine. Anything beyond that seems difficult and likely to impose a significant extra load on tool wrapper authors. 
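The namespacing suggestion could look something like this in a tool's XML (the `peterjc_` prefix and the version number are purely illustrative, not a formal convention):

```xml
<!-- Hypothetical namespaced tool ID: prefixing with the author's
     username avoids clashes between an original tool and a fork. -->
<tool id="peterjc_venn_list" name="Venn Diagram" version="0.0.3">
  ...
</tool>
```

Two forks of the same wrapper would then carry distinct IDs, so a Galaxy instance could install both without conflict.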
It'll be up to the authors to decide what level of complexity they care to handle, but we want to move away from the situation where someone installs a tool but finds that it's unusable because the actual underlying dependency doesn't exist and is non-trivial to install. This larger aim of installing the underlying dependencies is impossible in general - but that seems to be what you want to aim for. Consider the obvious use case of closed source (non-redistributable) 3rd party binaries. I can think of several examples from the current Tool Shed wrappers, including the Roche Newbler off-instrument applications, TMHMM and SignalP. Agreed, thankfully, the current dependency system (tool_dependency_dir in the config file (not in the sample config, sorry, I'll remedy that shortly!)) only requires that you have an environment file that configures whatever is necessary (generally just $PATH) to find a dependency. So the tools in the Tool Shed would provide the XML, wrapper script (if necessary), and then instructions or perhaps an interface to configure the env file. Even if you just hope to cover open source tool dependencies, this is another big problem which seems like something Galaxy shouldn't be taking on. Frankly, the only way I expect this grand plan to have any practical chance of success is if you limit yourselves to a single existing Linux package management platform like RPM or Deb files (although doing that would limit Galaxy's appeal), e.g. work hand in hand with Debian-Med to ensure any missing tool is covered. Distributing binaries for the core platforms (Linux i686/x86_64) and Mac OS X is probably not terribly difficult for us, but would be more work for 3rd party developers - but the choice to do this is up to them. I also haven't given too much thought to how this would work. dpkg and rpm have the upside of being deterministic, but the downside of being platform-specific, requiring root, and not having much ability to install to varying paths. 
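The env-file approach Nate describes might look like this sketch (the path, tool, and file name are hypothetical; the real layout depends on how tool_dependency_dir is organized):

```shell
# Hypothetical env.sh for a locally installed TMHMM 2.0.
# Galaxy would source a file like this before running the wrapped tool,
# so only $PATH (or similar variables) needs configuring per dependency.
PATH="/opt/tmhmm-2.0/bin:$PATH"
export PATH
```

This keeps closed-source binaries out of the Tool Shed entirely: the admin installs them by whatever means the license allows, then points Galaxy at them with a few lines of shell.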
A fallback to source if binaries are not available would also be nice, if it's possible to write some easy instructions on how to compile, but of course this won't always be the case. Are you biting off more than you can chew? I hope I am misinterpreting your plans. Hopefully not! We're trying to think this through pretty thoroughly before we get started, thanks for joining in the discussion. =) (And for the umpteenth time, I am frustrated I couldn't make it to the Galaxy conference last week in person - more for this kind of discussion rather than the talks themselves. Will you be at BOSC or ISMB 2011 in Vienna? Maybe that could be another thread...) Agreed! I do believe there are some people going to BOSC, Dave will hopefully chime in with the details (when he's awake, I think he was only flying back today). --nate Regards, Peter
Re: [galaxy-dev] Out of Memory Error (Galaxy Local Instance) - File Download
John David Osborne wrote: Hi, We have run into an error when trying to download a previously uploaded file from our local galaxy instance (which lives in a 512 MB VM). The error is at the end of this message. In a previous thread, Nate advised setting use_debug = False in universe_wsgi.ini. Is this because the debugging code is causing the error, but otherwise the download would be fine? Hi John, That's correct, debug = True will cause the entire response to be loaded into memory, so once all available memory and swap is consumed, the process will crash. Also, is 512 MB enough for a VM? (Our setup has actual jobs being submitted to the cluster) Sure, it should be okay if you're not doing anything locally (make sure you set set_metadata_externally = True). --nate -John Error Traceback: ⇝ MemoryError: out of memory URL: http://galaxy.uabgrid.uab.edu/datasets/ba1915f3923e3bf1/display?to_ext=sam Module weberror.evalexception.middleware:364 in respond app_iter = self.application(environ, detect_start_response) Module paste.debug.prints:98 in __call__ environ, self.app) Module paste.wsgilib:544 in intercept_output output.write(item) MemoryError: out of memory
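For reference, the two settings discussed in this exchange live in universe_wsgi.ini; a minimal sketch with the values suggested here (option placement within the file is illustrative):

```ini
# universe_wsgi.ini
debug = False                   # avoid buffering entire responses in memory
set_metadata_externally = True  # run metadata-setting outside the server process
```

With debug disabled, downloads are streamed rather than held in memory, which is what allows large files to be served from a small VM.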
Re: [galaxy-dev] recovery: galaxy restart
Hi Nate, Yes, I have seen the function __check_jobs_at_startup(); it calls the recover function after checking the state of the job in the database and updating the JobWrapper. But what does it do after calling the recover function? And one more thing: does it use monitor() and check_watched_items() in drmaa.py? Thanks Harendra On Wed, Jun 1, 2011 at 9:10 PM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi Nate, I got your point, but which part of the code is doing all these things? I mean, how exactly is this done? Is it using any other function apart from recover? Yes, see __check_jobs_at_startup() in lib/galaxy/jobs/__init__.py --nate Regards Harendra On Wed, Jun 1, 2011 at 8:56 AM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) as per my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks for the running jobs in the database. My problem is how it regains the same old state of Galaxy (especially the GUI) from before Galaxy was restarted. Can anyone explain the flow of the recover function and how the old state is regained? Hi Harendra, I'm not sure I understand what you mean by old state and the GUI - all that's really necessary here is to determine what Galaxy considers to be the state of the job (new, queued, running), recreate the in-memory job components (the JobWrapper), and place the job back in Galaxy's DRM-monitoring queue, which will then proceed with the process of finishing the job if it's finished in the DRM, or waiting for it to finish if it's still queued or running in the DRM. --nate Regards Harendra
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
On Wed, Jun 1, 2011 at 5:25 PM, Nate Coraor n...@bx.psu.edu wrote: Peter Cock wrote: Well, yes and no - as long as there are competing versions of a Galaxy tool (e.g. from an original author and a fork by a second author), and they use the same ID in their XML, you have a clash. This will have to be considered in the (automated) install interface. i.e. In general, when installing or updating any tool, there may be existing versions of some components already present. In fact two completely unrelated tools could even have the same XML ID by accident. I agree there could be a problem with tool ID uniqueness. We've talked about suggesting that people namespace their tool IDs to prevent this, but nothing formal has materialized at this point. That sounds sensible, and the sooner the better. I'm not immediately sold on this plan. To me one of the big plus points of having a single Official Tool Shed looked after by the Galaxy team is the convenience factor (a one-stop shop), which requires critical mass, plus whatever QA happens as part of the current approval process. I would regard it as a step backwards if, in order to hunt for a wrapper for a given tool, I had to resort to Google in order to find all the individual Galaxy Tool Sheds. It'll be possible for people to run their own Tool Sheds if they'd like, for whatever purpose - and this may be necessary for sharing extremely large data which we can't possibly host at the main Shed, but there should be an aggregator somewhere which lists all of the available public Sheds and makes it easy to add them as new sources to your Galaxy install. Like a slightly more organized Debian APT system. If there is an official meta tool shed aggregator, that would address my main concern about fragmenting things. If you mean by dependencies the small task of installing the tool XML and associated scripts and data files currently bundled in the tarballs on the current Tool Shed, that seems fine.
Anything beyond that seems difficult and likely to impose a significant extra load on tool wrapper authors. It'll be up to the authors to decide what level of complexity they care to handle, Good - that silences a lot of my worries. ... but we want to move away from the situation where someone installs a tool but finds that it's unusable because the actual underlying dependency doesn't exist and is non-trivial to install. Improving the documentation shown on the Tool Shed could help here - make it easier for the tool wrapper author to tell the Tool Shed user what will be required. Currently we get a short plain-text box as part of the upload (no markup), and can include a (plain text) readme file which is easily viewable from the Tool Shed. I've just filed an enhancement request on a related idea: https://bitbucket.org/galaxy/galaxy-central/issue/565/ Show mockup of tool GUI in Galaxy Tool Shed This larger aim of installing the underlying dependencies is impossible in general - but that seems to be what you want to aim for. Consider the obvious use case of closed-source (non-redistributable) 3rd-party binaries. I can think of several examples from the current Tool Shed wrappers, including the Roche Newbler off-instrument applications, TMHMM and SignalP. Agreed, thankfully, the current dependency system (tool_dependency_dir in the config file (not in the sample config, sorry, I'll remedy that shortly!)) only requires that you have an environment file that configures whatever is necessary (generally just $PATH) to find a dependency. So the tools in the Tool Shed would provide the XML, wrapper script (if necessary), and then instructions or perhaps an interface to configure the env file. I'd hope the common case, where all that is required is for the tool binary to be on the path, would not require any extra configuration files. See also: https://bitbucket.org/galaxy/galaxy-central/issue/82 [cut] Are you biting off more than you can chew? I hope I am misinterpreting your plans.
Hopefully not! We're trying to think this through pretty thoroughly before we get started, thanks for joining in the discussion. =) I've been reassured :) Peter
Re: [galaxy-dev] recovery: galaxy restart
Harendra chawla wrote: Hi Nate, Yes, I have seen the function __check_jobs_at_startup(); it calls the recover function after checking the state of the job in the database and updating the JobWrapper. But what does it do after calling the recover function? And one more thing: does it use monitor() and check_watched_items() in drmaa.py? Yes - notice that the last step of the recover() method is to insert the job state object into the DRM monitor queue with: self.monitor_queue.put( drm_job_state ) This is the same as the final step of the submission process, the queue_job() method. Once this happens, the monitor thread will pick up the job state from monitor_queue (a Python Queue instance) and monitor it with monitor()/check_watched_items(). --nate Thanks Harendra On Wed, Jun 1, 2011 at 9:10 PM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi Nate, I got your point, but which part of the code is doing all these things? I mean, how exactly is this done? Is it using any other function apart from recover? Yes, see __check_jobs_at_startup() in lib/galaxy/jobs/__init__.py --nate Regards Harendra On Wed, Jun 1, 2011 at 8:56 AM, Nate Coraor n...@bx.psu.edu wrote: Harendra chawla wrote: Hi everyone, I am trying to modify the *recover* function in drmaa.py (/galaxy_central/lib/galaxy/jobs/runners/drmaa.py) as per my requirements, but I am not able to understand the flow of that function. The recover function is called when the Galaxy server is restarted. It first looks for the running jobs in the database. My problem is how it regains the same old state of Galaxy (especially the GUI) from before Galaxy was restarted. Can anyone explain the flow of the recover function and how the old state is regained? Hi Harendra, I'm not sure I understand what you mean by old state and the GUI - all that's really necessary here is to determine what Galaxy considers to be the state of the job (new, queued, running), recreate the in-memory job components (the JobWrapper), and place the job back in Galaxy's DRM-monitoring queue, which will then proceed with the process of finishing the job if it's finished in the DRM, or waiting for it to finish if it's still queued or running in the DRM. --nate Regards Harendra
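The recovery path described in this thread can be sketched roughly as follows. This is a minimal illustration, NOT the actual Galaxy source: the class and attribute names below are stand-ins for the real DRMAA job state and JobWrapper objects, and only the queue handoff mirrors the thread's description.

```python
# Sketch (illustrative, not Galaxy's code) of the flow described above:
# recover() rebuilds a per-job state object from the database record and
# puts it on the same monitor queue that queue_job() feeds; the monitor
# thread then watches the job via monitor()/check_watched_items().
from queue import Queue


class DrmJobStateSketch:
    """Stand-in for the per-job state object the DRMAA runner tracks."""
    def __init__(self, job_wrapper, external_id, old_state):
        self.job_wrapper = job_wrapper   # in-memory job components
        self.external_id = external_id   # the job's ID in the DRM
        self.old_state = old_state       # e.g. 'queued' or 'running' at restart


class RunnerSketch:
    def __init__(self):
        # Watched by the monitor thread in the real runner.
        self.monitor_queue = Queue()

    def recover(self, job, job_wrapper):
        # Rebuild state from what the database recorded about the job...
        state = DrmJobStateSketch(job_wrapper, job.external_id, job.state)
        # ...then re-enter the normal monitoring path - the same final
        # step queue_job() performs after submission.
        self.monitor_queue.put(state)
```

Nothing in the GUI needs to be "restored" explicitly: the history panel simply reflects the job states in the database, which the monitor thread keeps updating once the job is back in the queue.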
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
(apologies in advance, limiting my response to the two questions below) On Jun 1, 2011, at 11:54 AM, Peter Cock wrote: On Wed, Jun 1, 2011 at 5:25 PM, Nate Coraor n...@bx.psu.edu wrote: Peter Cock wrote: Well, yes and no - as long as there are competing versions of a Galaxy tool (e.g. from an original author and a fork by a second author), and they use the same ID in their XML, you have a clash. This will have to be considered in the (automated) install interface. i.e. In general, when installing or updating any tool, there may be existing versions of some components already present. In fact two completely unrelated tools could even have the same XML ID by accident. I agree there could be a problem with tool ID uniqueness. We've talked about suggesting that people namespace their tool IDs to prevent this, but nothing formal has materialized at this point. That sounds sensible, and the sooner the better. Agreed. I think simple namespace prefixes (maybe the hg account?) are the easiest option. I'm not immediately sold on this plan. To me one of the big plus points of having a single Official Tool Shed looked after by the Galaxy team is the convenience factor (a one-stop shop), which requires critical mass, plus whatever QA happens as part of the current approval process. I would regard it as a step backwards if, in order to hunt for a wrapper for a given tool, I had to resort to Google in order to find all the individual Galaxy Tool Sheds. It'll be possible for people to run their own Tool Sheds if they'd like, for whatever purpose - and this may be necessary for sharing extremely large data which we can't possibly host at the main Shed, but there should be an aggregator somewhere which lists all of the available public Sheds and makes it easy to add them as new sources to your Galaxy install. Like a slightly more organized Debian APT system. If there is an official meta tool shed aggregator, that would address my main concern about fragmenting things.
Not sure how feasible this is, but could you use hg subrepositories for this purpose? For instance, have a 'blessed' set of galaxy tool sheds (as subrepos) listed in a main tool shed repository. One of the nice advantages of this is it could allow one to use git or svn, though I think sticking with hg-only repos is the simplest option for now. chris PS - wonderful conference, sorry that Peter couldn't make it!
Re: [galaxy-dev] The new hg based Galaxy Tool Shed
Peter Cock wrote: If there is an official meta tool shed aggregator, that would address my main concern about fragmenting things. If nothing else, there can be a wiki page, although something programmatic would be ideal. ... but we want to move away from the situation where someone installs a tool but finds that it's unusable because the actual underlying dependency doesn't exist and is non-trivial to install. Improving the documentation shown on the Tool Shed could help here - make it easier for the tool wrapper author to tell the Tool Shed user what will be required. Currently we get a short plain-text box as part of the upload (no markup), and can include a (plain text) readme file which is easily viewable from the Tool Shed. I've just filed an enhancement request on a related idea: https://bitbucket.org/galaxy/galaxy-central/issue/565/ Show mockup of tool GUI in Galaxy Tool Shed Yeah, eventually we'll have to parse the tool configs in the repo, so functionality like this should show up as the Shed matures. Not sure about the difficulty of doing the tool form mockup, but I like the idea. This larger aim of installing the underlying dependencies is impossible in general - but that seems to be what you want to aim for. Consider the obvious use case of closed-source (non-redistributable) 3rd-party binaries. I can think of several examples from the current Tool Shed wrappers, including the Roche Newbler off-instrument applications, TMHMM and SignalP. Agreed, thankfully, the current dependency system (tool_dependency_dir in the config file (not in the sample config, sorry, I'll remedy that shortly!)) only requires that you have an environment file that configures whatever is necessary (generally just $PATH) to find a dependency. So the tools in the Tool Shed would provide the XML, wrapper script (if necessary), and then instructions or perhaps an interface to configure the env file.
I'd hope the common case, where all that is required is for the tool binary to be on the path, would not require any extra configuration files. See also: https://bitbucket.org/galaxy/galaxy-central/issue/82 Well, use of the dependency system isn't required, so just setting things up on the $PATH is always a possibility. I was going to suggest that your patch could be applied if it was conditional on the local runner and checked after any requirement type=package dependencies were set up, but there's still the problem of people running jobs through the local runner which are actually sent to the cluster without Galaxy's knowledge. Perhaps this is something we shouldn't worry too much about, but I know there are people doing it. --nate [cut] Are you biting off more than you can chew? I hope I am misinterpreting your plans. Hopefully not! We're trying to think this through pretty thoroughly before we get started, thanks for joining in the discussion. =) I've been reassured :) Peter
Re: [galaxy-dev] Script which creates users
We use external authentication against our local LDAP directory - when configured for external auth, Galaxy will automagically create a user account the first time it sees a new user name from Apache - see https://bitbucket.org/galaxy/galaxy-central/wiki/Config/ApacheProxy On Wed, Jun 1, 2011 at 1:55 PM, Chorny, Ilya icho...@illumina.com wrote: Can someone point me to the script that adds users to the user database? I want to check whether a Linux user name exists and create that account if it doesn't. BTW, has anyone else implemented this? Thanks, Ilya Ilya Chorny Ph.D. Bioinformatics - Intern icho...@illumina.com 858-202-4582 -- Ross Lazarus MBBS MPH; Associate Professor, Harvard Medical School; Director of Bioinformatics, Channing Lab; Tel: +1 617 505 4850; Head, Medical Bioinformatics, BakerIDI; Tel: +61 385321444;
Re: [galaxy-dev] Error getting to http://galaxy/user/create on local instance
It's also fixed in stable - the release right before the community conference includes a bug fix that corrects the URL generated for that link. --nate Kanwei Li wrote: Fixed on trunk by setting a default cntrller when there isn't one. -K On Thu, May 26, 2011 at 2:16 PM, Dave Walton dave.wal...@jax.org wrote: Just removed user.pyc and restarted the server. Did not resolve the problem. Dave On 5/26/11 8:09 AM, Greg Von Kuster g...@bx.psu.edu wrote: Hi Glen, I've never seen this before, but perhaps your user controller is using an old version ( I'm just grasping here, because this is pretty obscure behavior ). Try deleting the following file - make sure it is the .pyc file. ~/lib/galaxy/web/controllers/user.pyc The user controller contains the code that generates the you may create one link, but the current version of this controller uses the new method signature as well, so the only thing I can think of is that your compiled version ( user.pyc ) is old. Let us know if this doesn't resolve the issue. Thanks! On May 26, 2011, at 7:39 AM, Glen Beane wrote: I just checked: using the User - Register menu works. However, we require our users to log in, so when they go to galaxy.jax.org they are redirected to a login page if they are not already logged in. That page has the text This installation of Galaxy has been configured such that only users who are logged in may use it. If you don't already have an account, you may create one. The you may create one text links to http://galaxy.jax.org/user/create, and clicking on the link results in the error. -glen On May 26, 2011, at 4:00 AM, Greg Von Kuster wrote: Hello Dave, Can you clarify how your users are accessing the following URL http://galaxy.jax.org/user/create I originally assumed they were using the Galaxy UI to register by clicking the Register link on the User popup menu.
However, perhaps they are just pointing their browser to the URL. Is this the case? If so, why are you not clicking on the menu link? If it is the case that your users are entering the address in their browsers rather than using the normal Galaxy UI, then the address should include an additional request param as is shown here: http://galaxy.jax.org/user/create?cntrller=user Greg On May 25, 2011, at 1:46 PM, Dave Walton wrote: Greg, We do not use the template_cache_path in our config. I deleted all the files in ~/database/compiled_templates and restarted my server. I still get the error (even with a shift-reload of my browser). I've tried it (on the Mac) with Firefox 3.5.2, Safari 5.0.5 and Google Chrome 11.0.696.71. We run two totally separate instances of Galaxy, on different virtual machines: one for testing, development of tools and deploying new versions of Galaxy, and one for production, where our scientists do their analysis. Each of these servers runs 4 instances of the Galaxy process: 3 web applications and 1 job runner. We get the error on both servers, but I'm doing all of this troubleshooting on our development server. This behavior is occurring for any user who tries to create a new Galaxy account (so the original user who reported the problem, and myself as I test it; I can ask others to try if you think that will help). I mentioned the browsers I've tried above. I've not tried it from a Windows box, but can, again, if you think that will help. We are running whatever version of galaxy-dist was available about a week and a half ago, which is when Glen did the upgrade. Thanks, Dave On 5/24/11 11:29 PM, Greg Von Kuster g...@bx.psu.edu wrote: Hi Dave, If you do not have a template_cache_path config setting in universe_wsgi.ini, then your cached templates are stored in the default directory of ~/database/compiled_templates.
Deleting your cached templates in ~/database/compiled_templates should have corrected the problem, assuming there are no other issues within your Galaxy environment, so let's begin down the path of determining the cause. You mention your development server in your response. Are you running only 1 Galaxy instance, or more than 1? Is the behavior occurring for all of your users or just a few? Does it occur with different browsers, or is it related to a certain browser brand? What version of Galaxy are you running? Thanks Dave, Greg Von Kuster On May 24, 2011, at 6:06 PM, Dave Walton wrote: Greg, We've tried just reloading the browser and that didn't do anything. As for deleting the cached templates, on our development server I tried deleting everything under the database/compiled_templates directory (I even restarted my server) and it appeared to
Re: [galaxy-dev] Further Progress
icho...@illumina.com wrote: Have you made any additional progress on implementing running DRMAA (SGE) jobs as a different user (i.e. selecting which user)? Also, have you found a way to integrate Linux usernames with Galaxy user names? Hi Ilya, This was a topic of much interest at the community conference and the subject of Paul Smith's lightning talk. I believe he's going to be working on it, but the Galaxy development team will also be addressing it in the next month or two as well. I am planning to open a thread on the development list sometime this week to get feedback on how to resolve some of the lingering problems that arise from implementing this. --nate Thanks, Ilya
Re: [galaxy-dev] tmp directory not cleaned up?
Ryan Golhar wrote: I just noticed a lot of files in my Galaxy tmp directory. Since this isn't a system tmp directory, the system cron scripts don't clean it up. Is there a Galaxy cron script that can be used to clean up this directory? Hi Ryan, Sorry for the (very) late reply, I'm picking up some threads which appear to have fallen through the cracks. We do this with just a (GNU) find: find /path/to/tmp -depth -mtime +7 -delete You can also use tmpreaper on Debian/Ubuntu. --nate
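To see what that find invocation does without touching a real Galaxy tmp directory, here is a small self-contained demo against a throwaway scratch directory (GNU find and GNU touch assumed; `-mindepth 1` is added here so the scratch directory itself is left in place):

```shell
# Demo of the cleanup command from the thread, run against a scratch
# directory rather than a real Galaxy tmp dir.
scratch=$(mktemp -d)
touch "$scratch/fresh.dat"                    # modified now: survives
touch -d '10 days ago' "$scratch/stale.dat"   # older than 7 days: removed
find "$scratch" -mindepth 1 -depth -mtime +7 -delete
ls "$scratch"                                 # only fresh.dat remains
```

In a cron job you would point the find at the tmp_dir configured in universe_wsgi.ini instead of the scratch directory.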
Re: [galaxy-dev] Multiple galaxy instances
Jean-Baptiste Denis wrote: Hello everybody, Hi Jean-Baptiste, I'm in the process of providing Galaxy for multiple teams. I've already set up a testing instance using the production setup page on the wiki (Apache + SGE) and it works quite well, judging by the users' feedback. This setup is typically used for NGS, dealing with data on an NFS share without uploading them to the instance. Why do I need multiple instances? Maybe I'm not using Galaxy correctly; correct me if I'm wrong. My goal is to delegate the management of libraries/datasets to a Galaxy admin of each team from the beginning: I do NOT want a SINGLE independent super admin to manage the access for multiple teams, it doesn't scale. Okay, you are correct, then, in needing to do this in separate instances. We'd like for this to eventually become a role instead of a superuser privilege, but I don't know when it'll be implemented. The Galaxy instance and the underlying Galaxy system user must have access to the NGS data on the NFS (v3) share. This means that the galaxy user must be in a group that has access to the data. I can delegate the process of managing datasets and libraries to a dedicated Galaxy admin. This setup is working quite well with the single-instance setup. My job as a sysadmin is reduced to Galaxy setup and maintenance: I'm not involved in the library/dataset management. The problem is that this setup does not work if there is another team with data they don't want to share with others (don't blame me for that): the galaxy system user must access the data of the first team AND the second team, which means that the Galaxy admin of each team could access everything. One solution to this problem would be to have an independent Galaxy super admin with access to everything who manages data access for each team. I don't like this solution; like I said, it doesn't scale. So, another way to deal with that is to give each team its own Galaxy instance (each running as a specific system galaxy user) with a dedicated Galaxy admin. Two possibilities: - N Galaxy trees, each with a differently tuned universe_wsgi.ini file (dedicated path, port, database, etc.). The problem here is on the sysadmin side: the update effort must be repeated N times. - A unique Galaxy tree, and N tuned (dedicated path, port, database, etc.) universe_wsgi.ini files. This seems the best option to me, but I need to know whether Galaxy internals can manage that kind of setup. The latter should work fine, but you may have problems if users of one Galaxy instance want to share with users of another Galaxy instance. Hope this helps, and sorry for the long delay in response. --nate What do you think? Any inputs, remarks or advice are welcome! Regards, Jean-Baptiste
Re: [galaxy-dev] Multiple galaxy instances
On Wed, Jun 1, 2011 at 3:34 PM, Nate Coraor n...@bx.psu.edu wrote: Jean-Baptiste Denis wrote: My goal is to delegate the management of libraries/datasets to a Galaxy admin of each team from the beginning: I do NOT want a SINGLE independent super admin to manage the access for multiple teams, it doesn't scale. Okay, you are correct, then, in needing to do this in separate instances. We'd like for this to eventually become a role instead of a superuser privilege, but I don't know when it'll be implemented. This may not work for you, but this is how we approach this problem - create a separate library folder at the top of the library hierarchy for each separate team; require the team role for access to that library; assign all the admin privs for all subfolders and data sets to a user in that team to manage all the material within that team's library and to take responsibility for ensuring that every item requires that team's role for access. Every member of a team has to have the team role, and every data set for the team must have that permission. I'm pretty sure this is possible, if a little tedious to set up, but users would not be able to see anything in a library they didn't have permission to see, and each separate group's admin would only be able to administer things at or below their group's top-level folder - but not in folders for which they did not have administrative permissions. This avoids all the problems with multiple separate instances but will require some careful administration - which each team would be responsible for. The Galaxy admin of the single instance can always see all libraries and contents - and if that's a problem for your users, please don't tell them that the root user of any unix filesystem (your system administrator) already has access to everything!!
[galaxy-dev] galaxy deployment workflow or best-practices
I looked at the Galaxy deployment presentation from the Galaxy Community Conference ( http://wiki.g2.bx.psu.edu/GCC2011?action=AttachFiledo=viewtarget=GalaxyDeploymentandAPI.pdf ) and it was really helpful. I am wondering if you could share any details or best practices for maintaining the Galaxy install base. For example, do you use any build/deploy tools or workflow for updating the Galaxy install? Any comments or suggestions on this will be really helpful. -- Thanks, Shantanu.
[galaxy-dev] separate galaxy tools directory
I am wondering if the Galaxy tools directory structure can be changed so that tool configurations reside outside the Galaxy code base. Right now the Galaxy code comes with some default/pre-installed tools in the $GALAXY_DIST/tools directory. Any additional tool configurations are also defined in the same directory. I think if we use a separate directory for additional tools then the core Galaxy code base can remain clean. This may help in updating the Galaxy code as well. Any thoughts? -- Thanks, Shantanu.
Re: [galaxy-dev] user and dataset management with LDAP: some questions
Louise-Amélie Schmitt wrote: Hello everyone I'm updating the remaining questions: Hi Louise-Amélie, I think it's possible that some of these were answered in person at the community conference, but for a permanent record (and in case we didn't actually answer them) here they are: 1) We use LDAP for user registration and logging, would it be possible to retrieve automatically the LDAP users' groups and create the groups in Galaxy accordigly? (and of course put the user in their respective groups) You can automatically register new users an Galaxy by using the LDAP authentication method described in the Apache Proxy documentation on the wiki. Groups are another story, currently there is no way to synchronize Galaxy groups with LDAP groups, although if the API were updated to allow for group creation and management, that would probably be the way to do it. 2) Is it possible, still using LDAP, to delete a user and all his datasets? Meaning, you delete the user in LDAP, can that cause the user to be deleted in Galaxy? Not currently. Probably also a good candidate for API functionality, not currently implemented. 3) Is it possible to automatically suppress, for instance, any dataset that was added more than a couple of months ago? Only for anonymous datasets, all datasets owned by a real user are kept until marked deleted by the user. 4) Is there a not-so-intrusive way to add a column in the user list showing the disk space they use respectively? I believe a later email included a quick patch for this, but I'm also working on adding this into the committed code base. --nate Thanks in advance for your help Regards, L-A Le mardi 12 avril 2011 à 16:41 +0200, Louise-Amélie Schmitt a écrit : Hello everyone I have a couple of questions regarding user and dataset management. 1) We use LDAP for user registration and logging, would it be possible to retrieve automatically the LDAP users' groups and create the groups in Galaxy accordigly? 
(and of course put the users in their respective groups)

2) Is it possible, still using LDAP, to delete a user and all of his datasets?

3) Is it possible to automatically remove, for instance, any dataset that was added more than a couple of months ago?

4) Is there a not-so-intrusive way to add a column to the user list showing the disk space each user uses?

5) I tried to see how the API works, but I have to admit I didn't get a thing. I read the scripts/api/README file, and there I saw that one needs to access the user preferences to generate an API key. What is its purpose? Is there a way to do it when you use LDAP (and therefore have no access to that key generator)?

Sorry, this is a bit random, but I'm kind of drowning here, since I'm not used to manipulating apps this huge. Thanks for your help and patience. Cheers, L-A

___ Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
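As context for question 5: Galaxy's API scripts authenticate each request by appending the user's API key (generated via the user preferences page) to the request URL as a "key" query parameter. Here is a minimal sketch of that pattern; the host, endpoint, and key below are made-up examples, not values from this thread:

```python
# Sketch of how a Galaxy API request carries the user's API key:
# the key generated in the user's preferences is appended to each
# request URL as a "key" query parameter. Host, endpoint, and key
# here are illustrative assumptions.
from urllib.parse import urlencode

def make_api_url(base_url, endpoint, api_key, params=None):
    """Build a Galaxy API request URL carrying the user's API key."""
    query = dict(params or {})
    query["key"] = api_key  # every API request must carry the key
    return "%s/api/%s?%s" % (base_url.rstrip("/"), endpoint, urlencode(query))

url = make_api_url("http://localhost:8080", "histories", "deadbeef123")
print(url)  # http://localhost:8080/api/histories?key=deadbeef123
```

This is only the authentication mechanics; which endpoints exist and what they return depends on the Galaxy version in use.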
Re: [galaxy-dev] separate galaxy tools directory
Shantanu Pavgi wrote: I am wondering if the Galaxy tools directory structure can be changed so that tool configurations reside outside the Galaxy code base. Right now Galaxy ships with some default/pre-installed tools in the $GALAXY_DIST/tools directory, and any additional tool configurations are also defined in the same directory. I think if we used a separate directory for additional tools, the core Galaxy code base could remain clean. This may help in updating the Galaxy code as well. Any thoughts?

Hi Shantanu, It's entirely possible to put an absolute path into the <tool file=".../"> tags in tool_conf.xml. --nate

-- Thanks, Shantanu.
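A tool_conf.xml section mixing the two styles might look like the sketch below; the section name and the absolute path are made-up examples, and only the relative path to a stock filter tool comes from a standard Galaxy distribution:

```xml
<?xml version="1.0"?>
<toolbox>
  <section name="Local Tools" id="local_tools">
    <!-- Tools shipped with Galaxy use paths relative to $GALAXY_DIST/tools -->
    <tool file="filters/sorter.xml" />
    <!-- An absolute path keeps custom tools outside the code base;
         this path is a made-up example -->
    <tool file="/opt/local-galaxy-tools/my_tool/my_tool.xml" />
  </section>
</toolbox>
```

Keeping site-specific tools under a separate directory like this means a Galaxy code update never touches them.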
Re: [galaxy-dev] problems previewing certain files, downloading files and with login
Hi Nate, I was finally able to track down the login issue. It had to do with the following setting in my universe_wsgi.ini:

cookie_path = /galaxy

Removing this fixed the problem, and I should be fine leaving it out since I don't need to run more than one instance of Galaxy. I probably shouldn't have had this setting in the first place, though I'm not exactly sure why it caused the problem. Anyway, thanks for all the help tracking these issues down. -Matt

On Mon, May 16, 2011 at 5:26 PM, Matthew Conte mco...@umd.edu wrote: Yep.

On May 16, 2011 7:14 AM, Nate Coraor n...@bx.psu.edu wrote: Matthew Conte wrote: On Fri, May 13, 2011 at 11:18 AM, Nate Coraor n...@bx.psu.edu wrote: That's correct, although it should've been fixed. Can you remove the contents of galaxy-dist/database/compiled_templates, clear the cache, reload, and see if you still get the same results? Thanks, --nate

I've removed the contents of that folder, cleared the browser cache, and reloaded, and I still get the same results (nothing different in the log either). I'm not sure if this will help, but maybe I should mention that when I try to log in, the following screen flashes for a brief second before automatically taking me back to the welcome screen: [screenshot attached: Screen shot 2011-05-13 at 1.44.25 PM.png]

I'm going to try to set up a test environment to replicate this shortly. Does this happen with both Apache and nginx? --nate
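For reference, cookie_path scopes Galaxy's session cookie to a URL prefix, which is only needed when the instance is served under a subdirectory (for example, behind a proxy at /galaxy). A sketch of the two configurations (the comments and surrounding context are mine, not from the thread):

```
# universe_wsgi.ini (sketch)

[app:main]
# Only set cookie_path when Galaxy is served under a URL prefix,
# e.g. http://host/galaxy behind Apache or nginx:
#cookie_path = /galaxy

# For an instance served at the root of its host, leave cookie_path
# unset (the default) to avoid the login redirect loop described above.
```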