Re: [galaxy-dev] Desire to contribute
Hi Paul,

Fixes or enhancements to individual tools might be a good place to start - and you won't need to know as much about the Galaxy internals. The Galaxy development team looks after a lot of tools/wrappers, but of course there are even more on the Tool Shed written and maintained by other groups. Fixing a non-core Galaxy tool may not be quite what your lecturer had in mind, so do check ;)

Peter

On Wednesday, February 6, 2013, Matthew Paul wrote:

Dear Galaxy Project community,

I am working with a group of students at the College of Charleston in South Carolina. Being interested in bioinformatics and software engineering, we chose to work on Galaxy for our open source class project. We are subscribed to the appropriate mailing list, have been accessing Trello, and are becoming familiar with the Galaxy architecture. Our first assignment is to identify and fix a bug, but unfortunately the bugs reported seem to be going right over our heads. Where would be a good place to start, so that we may be able to contribute to your system (documentation, etc.)? We are looking forward to your response.

Thank you,
Matt Paul

-- Forwarded message --
From: Matthew Paul mrp...@g.cofc.edu
Date: Tue, Feb 5, 2013 at 8:10 PM
Subject: Desire to contribute
To: galaxy-dev@lists.bx.psu.edu

Dear Galaxy Project community,

___
Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
[galaxy-dev] Import history from web server to local server
Hi all,

I'm trying to import a history from the web server to my local server. First I chose Export to File in the history menu and saw a message with a URL. Then, on my local server, I created a new history and chose Import from File in the new history's menu. I pasted the URL into the form, and a message says: Importing history from 'http://cistrome.org/ap/history/export_archive?id='. This history will be visible when the import is complete. But there is nothing in my new history and it seems that no operation is running! Is this normal? Is there another way to transfer a history and its datasets into another history?

Thanks.
julie
Re: [galaxy-dev] Fwd: Desire to contribute
At the risk of starting a discussion here: a long-standing enhancement request is to highlight the history item one is currently viewing. Situation sketch: I often hide the history panel to study results (displayed in the middle panel) in detail, and when I bring the history panel back, I often have to search for the item I was viewing - no clue at all. It really annoys me, but I don't know whether this can be fixed easily, or how deep you would need to dig. Anyway, you will make at least one person happy :-)

Cheers,
Joachim

Joachim Jacob
Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

On 02/06/2013 02:16 AM, Matthew Paul wrote:
[...]
Re: [galaxy-dev] Fwd: Desire to contribute
I thought it was on Trello already. Anyway, for the moment I cannot access Trello... When I can, I will search and perhaps add it!

Cheers,
Joachim

Joachim Jacob
Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib

On 02/06/2013 12:08 PM, Peter Cock wrote:

On Wed, Feb 6, 2013 at 11:01 AM, Joachim Jacob |VIB| joachim.ja...@vib.be wrote:
[...]

That sounds like a good usability enhancement - and it would likely need some knowledge of the mako template system used in Galaxy, and of HTML/CSS for the visual styling too. You said it was a long-standing enhancement request - is it filed on Trello? http://galaxyproject.org/trello (I couldn't find it myself.)

Peter
[galaxy-dev] workflow intermediate files hog memory
Hi All,

Intermediate files in a workflow often make up the large majority of a workflow's output, and when this is an NGS analysis, that volume can be HUGE. This is a considerable concern for me as we consider implementing a local install of Galaxy. Storing all of this seems useless (once the workflow has been worked out) and a huge waste of storage if one wants to actually persist the useful final outputs of workflows in Galaxy. Is there any way to specify that the output of particular steps in a workflow be deleted (or sent to /tmp) upon successful workflow completion? How are others dealing with this? Is it inadvisable to use Galaxy as a repository of results?

Thanks,
Mark

This message may contain confidential information. If you are not the designated recipient, please notify the sender immediately, and delete the original and any copies. Any use of the message by you is prohibited.
Re: [galaxy-dev] Why
Hmm. If that configuration option is set to True, and uncommented, then the only thing left to check would be to make sure you are logged in as an admin user. And, just reply-all, like you did to this mail, and everything will thread to the list properly. Thanks!

-Dannon

On Feb 6, 2013, at 4:59 AM, Sanjarbek Hudaiberdiev hudai...@icgeb.org wrote:

Sorry, I couldn't find a way of posting a reply on the thread - there was only an option of replying to the author. How can I post my reply so that it appears on the publicly visible list? Yes, I restarted, and in the drop-down there are only "import from your current history" and "upload files".

Sanjar

On 02/05/2013 06:19 PM, Dannon Baker wrote:

Have you restarted your Galaxy instance after changing the setting? And, in that drop-down, can you confirm that 'filesystem paths' is not an option? Also, please keep conversations on-list so that others might be able to help and/or benefit from the exchange.

-Dannon

On Feb 5, 2013, at 12:15 PM, hudai...@icgeb.org wrote:

I set allow_library_path_paste = True in universe_wsgi.ini and still don't have the option of copying locally on the Manage data libraries page. http://i.imgur.com/PEeCAnc.png?1 Should I set some other parameters to True?

quote author='Dannon Baker':

The absolute fastest method would be to enable filepath upload for data libraries (in universe_wsgi.ini -- allow_library_path_paste = True). Once that's set, you can paste the absolute file path into the upload box, and check the box for not copying the data. This will leave the data where it is on the external drive, though it will make it available in the Data Library. Obviously if you use this method, you'll need to have that external drive attached (and always at the same mount point) to use the data. If you actually *do* want to copy this data off the external drive, simply don't check that 'no copy' checkbox and the data will be copied directly through the filesystem and not uploaded.

-Dannon
Re: [galaxy-dev] workflow intermediate files hog memory
Hey Mark,

Galaxy by default saves everything, as you've noticed. In workflows, you can flag outputs, after which intermediate (unflagged) steps will be 'hidden' in the history, but you can't automatically delete them, though this is something we've wanted to do for a while. Unfortunately it requires rewriting the workflow execution model, so it's a larger task. As a stopgap measure, being able to wipe out those 'hidden' datasets in one step would probably be useful. I'd actually thought this was already implemented as an option in the history panel menu, but I don't see it now. I'm creating a Trello card now for adding that method, and there's already one for the deletion of intermediate datasets.

-Dannon

On Feb 6, 2013, at 7:07 AM, mark.r...@syngenta.com wrote:
[...]
Re: [galaxy-dev] Empty status for web uploaded non-empty datasets
On Feb 5, 2013, at 7:25 PM, Eric Enns wrote:

Hey,

In our local Galaxy install using the latest release (January), uploaded datasets are showing as empty. But when you download one, the contents are there; also, if you click auto-detect in Edit Attributes it will remove the empty status and allow peeking and such. If you upload via FTP the dataset does not show up as empty. We are setting the metadata on our cluster, if that is of any help.

-Eric

Hi Eric,

You'll probably need to disable attribute caching. From the doc [1]:

You may also find that you need to disable attribute caching in your filesystem. In NFS this is done with the -noac mount option (Linux) or -actimeo=0 (Solaris). The attribute cache can prevent Galaxy from detecting the presence of output files or properly reading their sizes. Note that there is some performance trade-off here since all attributes will have to be read from the file server upon every file access.

--nate

[1] http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster
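For illustration, the -noac option Nate quotes would typically go in /etc/fstab on the worker nodes. The server name and mount point below are examples, not values from this thread:

```
# Example /etc/fstab entry (Linux): mount the shared Galaxy data volume
# with NFS attribute caching disabled, so job output files and their
# sizes are seen immediately. Server path and mount point are illustrative.
fileserver:/export/galaxy  /mnt/galaxy  nfs  rw,noac  0  0
```

On Solaris the equivalent is -actimeo=0, as the quoted doc notes; expect extra load on the file server, since every attribute lookup goes over the wire.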
[galaxy-dev] Can Galaxy run on Hadoop system?
Hi,

I am a computer engineer and have to deploy a cloud computing system, like Hadoop. I want to deploy a local Galaxy system on the Hadoop platform. I know that Galaxy can run on a cloud such as EC2 from Amazon. My question is: can Galaxy run on Hadoop? (Not running within a virtual machine, but interacting with multiple slave nodes in Hadoop.) If NOT, do you have any plan to migrate it to the Hadoop platform?

Best regards,
lukeyoyo
Re: [galaxy-dev] Why
I am an admin user - I guess otherwise I wouldn't have access to the Manage data libraries page. I have two instances of Galaxy, one on a server and one locally on my laptop, and both have the same problem. I can't get the files to be copied locally, and it's a problem.

On 02/06/2013 02:10 PM, Dannon Baker wrote:
[...]
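For anyone following along, the setting Dannon describes can be checked and enabled from the shell. This is only a sketch: it assumes universe_wsgi.ini sits in the Galaxy root directory and that you have GNU sed.

```shell
# Sketch: enable filesystem-path paste for data libraries.
# Back up the config before editing it.
cp universe_wsgi.ini universe_wsgi.ini.bak

# Uncomment/set the option, whatever its current commented-out form is.
sed -i 's/^#*\s*allow_library_path_paste.*/allow_library_path_paste = True/' universe_wsgi.ini

# Confirm the active value.
grep '^allow_library_path_paste' universe_wsgi.ini
```

After this, restart Galaxy and log in as an admin user; the 'Upload files from filesystem paths' option should then appear on the Manage data libraries upload form.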
Re: [galaxy-dev] Can Galaxy run on Hadoop system?
Hi,

Because Hadoop is a different computing approach, all the underlying tools would need to be written for Hadoop as well, so currently Galaxy would not be very useful with only a Hadoop cluster as a backend. However, work is underway to allow Hadoop jobs to be run alongside other jobs from a Galaxy frontend.

--
James Taylor, Assistant Professor, Biology/CS, Emory University

On Wed, Feb 6, 2013 at 9:51 AM, lukeyoyo lukey...@gmail.com wrote:
[...]
Re: [galaxy-dev] Import history from web server to local server
This is likely a permissions issue: importing your history into another Galaxy instance requires the history to be accessible. Here's a solution that, while ugly, should work:

(1) Make the history accessible by going to Share/Publish and clicking the option to make it accessible.
(2) Export your history again.
(3) Use the history URL to import it into the other instance.

I've created a card for enhancements to this feature that will make this process easier in the future: https://trello.com/c/qCfAWeYU

Best,
J

On Feb 6, 2013, at 5:00 AM, julie dubois wrote:
[...]
Re: [galaxy-dev] Import history from web server to local server
Hi, thanks for your help. I've tested your procedure and it doesn't work - I get the same error. Sorry, and thanks for creating the card.

Julie

2013/2/6 Jeremy Goecks jeremy.goe...@emory.edu:
[...]
Re: [galaxy-dev] Import history from web server to local server
Ah, I see the issue: the Cistrome instance cannot be used anonymously (without login). It's not possible for one Galaxy instance to work with another instance's history, because instances fetch such objects anonymously rather than using login credentials. For now, you can download/copy the compressed history to a Web-accessible location (e.g. a local web server or Dropbox) and import the history from that location. We'll look into improving this in the future.

Best,
J.

On Feb 6, 2013, at 11:10 AM, julie dubois wrote:
[...]
Re: [galaxy-dev] Import history from web server to local server
Thanks! Just one question: to download this compressed history, is it the same as running wget url_of_exported_history and copying the resulting file to the local server? Because the file I get that way is not an archive, but a text file with HTML code.

Thanks again.
Julie

2013/2/6 Jeremy Goecks jeremy.goe...@emory.edu:
[...]
Re: [galaxy-dev] DustMasker tool for ncbi_blast_plus
Adding galaxy-dev list in CC as suggested by Peter.

On Wed, 06/02/2013 at 16.57 +, Peter Cock wrote:

On Tue, Feb 5, 2013 at 11:45 AM, Nicola Soranzo sora...@crs4.it wrote:

Dear Peter,

I have created a simple Galaxy tool for DustMasker of the NCBI BLAST+ suite, which I think would be a useful addition to the ncbi_blast_plus repository you're maintaining in the Galaxy Tool Shed. You can find it and hopefully pull it from: https://bitbucket.org/nsoranzo/ncbi_blast_plus

Kind regards,
Nicola

Hi Nicola,

Thanks for getting involved - we can discuss this on the galaxy-dev mailing list if you prefer? For now I have CC'd Edward Kirton as he is/was working on masking in BLAST databases for Galaxy. I can see the new file tools/ncbi_blast_plus/ncbi_dustmasker_wrapper.xml; however, it refers to multiple new file formats - where are they defined?

* acclist
* maskinfo_asn1_bin
* maskinfo_asn1_text
* seqloc_asn1_bin
* seqloc_asn1_text

Hi Peter,

I added these file formats mostly as placeholders for a future implementation. Now I have changed the tool a bit by removing the acclist and seqloc_xml formats, since they are not recognized by the latest versions of dustmasker (I also sent an email to blast-h...@ncbi.nlm.nih.gov to inform them of this bug). As before, you can find the new version at: https://bitbucket.org/nsoranzo/ncbi_blast_plus

I stripped the old commit and did a new one - not a very good practice, sorry about that.

Have you looked at the (commented out) bits in the makeblastdb wrapper which would perhaps be relevant? This is something Edward Kirton wrote which I haven't integrated yet:

<!-- SEQUENCE MASKING OPTIONS -->
<!-- TODO
<repeat name="mask_data" title="Provide one or more files containing masking data">
  <param name="file" type="data" format="asnb" label="File containing masking data" help="As produced by NCBI masking applications (e.g. dustmasker, segmasker, windowmasker)" />
</repeat>
<repeat name="gi_mask" title="Create GI indexed masking data">
  <param name="file" type="data" format="asnb" label="Masking data output file" />
</repeat>
-->

Perhaps all you need to offer in ncbi_dustmasker_wrapper.xml is the 'fasta' and 'asnb' (binary ASN) formats? Edward - did you have an 'asnb' definition?

'fasta' and 'interval' are the ones I'm interested in for my use case. 'maskinfo_asn1_bin' is probably the one referenced as 'asnb' in the cited code (ASN.1 is a general data serialization format, like XML). A file in this format can be given as input to makeblastdb -mask_data.

Nicola

--
Nicola Soranzo, Ph.D.
CRS4 Bioinformatics Program
Loc. Piscina Manna
09010 Pula (CA), Italy
http://www.bioinformatica.crs4.it/
Re: [galaxy-dev] Import history from web server to local server
Unfortunately, using wget won't work in this case. The reason you have access to the history is your Galaxy cookie, which isn't shared with wget/curl. You'll need to click on the export link in your Web browser to download the history to your local computer, and then move it to a local server.

Best,
J.

On Feb 6, 2013, at 12:07 PM, julie dubois wrote:
[...]
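As an aside, the symptom Julie describes - an "archive" that is really a text file with HTML code - can be checked from the shell before attempting an import. This is a generic sketch, not part of Galaxy; the file name is illustrative:

```shell
# Sketch: check whether a downloaded history export is a real gzip archive
# or an HTML error/login page, by testing for the gzip magic bytes (1f 8b).
f="galaxy-history.tar.gz"   # example file name
if head -c 2 "$f" | od -An -tx1 | grep -q '1f 8b'; then
    echo "OK: $f looks like a gzip archive"
else
    echo "WARNING: $f is not gzip data (probably an HTML page - download via the browser instead)"
fi
```

If the check reports HTML, re-download the export through a logged-in browser session rather than wget, per Jeremy's advice above.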
Re: [galaxy-dev] Fwd: Desire to contribute
Hi Matt, Here are a couple of things to consider: 1. Fix Select tool to match special characters: https://trello.com/c/cwrBpNP9 2. Extend history export to include composite dataset objects/files: https://trello.com/c/oq1ASbkC There are lots of other ideas, but they tend to be a lot more work. Please let the list know if any of the ideas posted so far grab you, or if you want further explanation. Thanks for your interest and for picking the Galaxy Project. Efforts like these really help the project move forward. Dave C On Tue, Feb 5, 2013 at 8:16 PM, Matthew Paul mrp...@g.cofc.edu wrote: Dear Galaxy Project community, I am working with a group of students at the College of Charleston in South Carolina. Being interested in bioinformatics and software engineering, we chose to work on Galaxy for our open source class project. We are subscribed to the appropriate mailing list, have been accessing Trello, and are becoming familiar with the Galaxy architecture. Our first assignment is to identify and fix a bug, but unfortunately the bugs reported seem to be going right over our heads. Where would be a good place to start, so that we may be able to contribute to your system (documentation, etc.)? We are looking forward to your response. Thank you, Matt Paul -- Forwarded message -- From: Matthew Paul mrp...@g.cofc.edu Date: Tue, Feb 5, 2013 at 8:10 PM Subject: Desire to contribute To: galaxy-dev@lists.bx.psu.edu Dear Galaxy Project community, -- http://galaxyproject.org/wiki/GCC2012 http://galaxyproject.org/ http://getgalaxy.org/ http://usegalaxy.org/ http://wiki.galaxyproject.org/
[galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
Hi guys, When I try to upload a directory of files from a server directory I'm seeing the error below. It appears to be trying to write to a temp directory somewhere that I'm guessing doesn't have enough space? Is there a way I can direct where it writes temporary files like this? Am I understanding right that these upload jobs are running on our cluster? I think it would be a problem if it's trying to use the default temp directory on each cluster node, since they aren't provisioned with much space. Please advise. Thanks, Greg

Miscellaneous information:
Traceback (most recent call last):
  File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 384, in <module>
    __main__()
  File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 373, in __main__
    add_file( dataset,

Job Standard Error:
Traceback (most recent call last):
  File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 384, in <module>
    __main__()
  File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 373, in __main__
    add_file( dataset, registry, json_file, output_path )
  File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 270, in add_file
    line_count, converted_path = sniff.convert_newlines( dataset.path, in_place=in_place )
  File "/misc/local/galaxy/galaxy-dist/lib/galaxy/datatypes/sniff.py", line 99, in convert_newlines
    fp.write( "%s\n" % line.rstrip( "\r\n" ) )
IOError: [Errno 28] No space left on device
Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
On Feb 6, 2013, at 2:32 PM, greg wrote: Hi guys, When I try to upload a directory of files from a server directory I'm seeing the error below. It appears to be trying to write to a temp directory somewhere that I'm guessing doesn't have enough space? Is there a way I can direct where it writes to for temporary files like this? Hi Greg, There are a few ways. For some parts of Galaxy, you will want to set new_file_path in universe_wsgi.ini to a suitable temp space. However, this is not the case for the upload tool. Am I understanding right, that these upload jobs are running on our cluster? I think it would be a problem if it's trying to use the default temp directory on each cluster node since they aren't provisioned with much space. This is correct. On the cluster, add something to your user's shell startup files (or see the environment_setup_file option in universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment variable to a suitable temp space. --nate Please advise. Thanks, Greg [traceback quoted earlier snipped]
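Nate's advice amounts to an environment setup file along the following lines. This is only a sketch: /scratch/galaxy is an example path, and which variable your tools actually honor ($TEMP vs. $TMPDIR) varies, so setting both is the safe option.

```shell
# Sketch of a cluster-side environment setup file, pointed at by the
# environment_setup_file option in universe_wsgi.ini so jobs source it
# before running. Substitute a directory with enough free space that
# exists (and is writable by the Galaxy user) on every node.
export TEMP=/scratch/galaxy
export TMPDIR=/scratch/galaxy   # Python's tempfile checks TMPDIR before TEMP
```

With this in place, tools that use Python's tempfile module (such as the upload tool's newline conversion) should create their scratch files under /scratch/galaxy instead of each node's /tmp.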
Re: [galaxy-dev] Why
Sorry, I couldn't find a way of posting a reply on the thread. There was only an option of replying to the author. How can I post my reply so that it appears in the publicly visible list? Yes, I restarted, and in that drop-down there are only import from your current history and upload files. Sanjar. On 02/05/2013 06:19 PM, Dannon Baker wrote: Have you restarted your galaxy instance after changing the setting? And, in that drop-down, can you confirm that 'filesystem paths' is not an option? Also, please keep conversations on-list so that others might be able to help and/or benefit from the exchange. -Dannon On Feb 5, 2013, at 12:15 PM, hudai...@icgeb.org wrote: I set allow_library_path_paste = True in universe_wsgi.ini and still don't have the option of copying locally on the manage data libraries page. http://i.imgur.com/PEeCAnc.png?1 Should I set some other parameters to True? Dannon Baker wrote: The absolute fastest method would be to enable filepath upload for data libraries (in universe_wsgi.ini -- allow_library_path_paste = True). Once that's set, you can paste the absolute file path into the upload box, and check the box for not copying the data. This will leave the data where it is on the external drive, though it will make it available in the Data Library. Obviously if you use this method, you'll need to have that external drive attached (and always at the same mount point) to use the data. If you actually *do* want to copy this data off the external drive, simply don't check that 'no copy' checkbox and the data will be copied directly through the filesystem and not uploaded. -Dannon
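For reference, the setting discussed in this thread looks like this in universe_wsgi.ini. As Dannon's question about restarting suggests, every Galaxy web process has to be restarted after the change before the 'filesystem paths' option appears:

```ini
# universe_wsgi.ini -- allow admins to add library datasets from
# server-side filesystem paths instead of uploading through the browser
allow_library_path_paste = True
```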
[galaxy-dev] Divide the web interface into sections for a tool.
Hi, Galaxy developers, I would like the parameters on a tool's web interface to be grouped into sections. Is there a way to do this? There is a page tag, but the documentation says we should avoid using it, and there is no replacement tag for page. The problem with the page tag is that there seems to be no previous-step button on the pages other than the first page. If there were a section tag rendered as a box with a title, with all the tags inside that section rendered inside the box, that would be great. Even a horizontal bar to separate the sections would be helpful. Thanks, Luobin
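A hypothetical sketch of the kind of grouping tag Luobin is asking for. Note this <section> element is not part of the tool config syntax being discussed here; the tag and attribute names are invented purely to illustrate the request:

```xml
<inputs>
  <param name="input" type="data" format="fastq" label="Reads"/>
  <!-- hypothetical grouping tag: rendered as a titled box containing
       all the params declared inside it -->
  <section name="advanced" title="Advanced options">
    <param name="min_score" type="integer" value="20" label="Minimum score"/>
    <param name="max_gaps" type="integer" value="2" label="Maximum gaps"/>
  </section>
</inputs>
```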
Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
Thanks Nate, It turns out I already had this as the first line of my job setup file: export TEMP=/scratch/galaxy But when I look in that directory, there's plenty of free space, and I also don't see any recent files there. So I'm wondering if the upload jobs aren't seeing that for some reason. Any ideas on how I could diagnose this more? -Greg Relevant info? grep env galaxy-dist/universe_wsgi.ini environment_setup_file = /usr/local/galaxy/job_environment_setup_file cat /usr/local/galaxy/job_environment_setup_file export TEMP=/scratch/galaxy #active Python virtual env just for galaxy source /usr/local/galaxy/galaxy_python/bin/activate ... path setup lines ... On Wed, Feb 6, 2013 at 2:37 PM, Nate Coraor n...@bx.psu.edu wrote: On Feb 6, 2013, at 2:32 PM, greg wrote: Hi guys, When I try to upload a directory of files from a server directory I'm seeing the error below. It appears to be trying to write to a temp directory somewhere that I'm guessing doesn't have enough space? Is there a way I can direct where it writes to for temporary files like this? Hi Greg, There are a few ways. For some parts of Galaxy, you will want to set new_file_path in universe_wsgi.ini to a suitable temp space. However, this is not the case for the upload tool. Am I understanding right, that these upload jobs are running on our cluster? I think it would be a problem if its trying to use the default temp directory on each cluster node since they aren't provisioned with much space. This is correct. On the cluster, add something to your user's shell startup files (or see the environment_setup_file option in universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment variable to a suitable temp space. --nate Please advise. 
Thanks, Greg [traceback quoted earlier snipped]
[galaxy-dev] galaxy error
Hi Bassam, First, I am going to post your question to the galaxy-...@bx.psu.edu mailing list. This is where local and cloud install Galaxy questions get the best exposure to the development community. Next, this isn't necessarily an error. And was the full line really this? nohup: ignoring input and redirecting stderr to stdout This is more of a Linux issue than a Galaxy issue, but I'll give some feedback and others can comment as well (a Google search will also bring up many threads on the topic). Basically, this is just a stderr warning and can be ignored. Whether you should be using it at all is another question - nohup isn't for all jobs. Personally, I use it for data processing sometimes (general use, not as part of running Galaxy), but I think I am the only one here, since there are other options for keeping processes detached from individual sessions. It is a habit. I see this message all the time and the job keeps on running. I could redirect the stderr to /dev/null but I usually just ignore it (lazy = type less characters). But I don't admin our Galaxy server and wouldn't consider using it for that purpose - which is why I am posting to the mailing list, where the admins can comment. On your system, the process might have actually run - this message has nothing to do with whether it ran or not. But you want to be careful: if the process is interactive (requires a password), you don't want to use nohup and redirect both stdout and stderr. I don't know what command you ran, but you can do a few things to check whether the process is running: See if the nohup.out file is keeping a log of the job (whether it is still running, or quit and put output there): $ more nohup.out See what jobs are running in the current session: $ jobs See what jobs are assigned to the Galaxy user: $ ps -u galaxy Check to see if what you asked the process to do has been done, or check the other galaxy logs for activity along those lines.
I am not sure if this is helpful or not, but perhaps it will point you in the correct direction (Linux command help/usage) where you can find out more. Take care, Jen Galaxy team On 2/6/13 11:24 AM, Bassam Tork wrote: Dear Jennifer, When I ran galaxy on our server it gave the error: nohup: ignoring input although I provided all inputs. Appreciate your help. -- Bassam Tork. -- Jennifer Hillman-Jackson Galaxy Support and Training http://galaxyproject.org
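Jen's suggestions can be tried with a minimal, self-contained nohup pattern. The command and log file name below are stand-ins, not anything from Bassam's setup:

```shell
# Start a long-running command detached from the terminal, redirecting
# both stdout and stderr to a named log so the "nohup: ignoring input"
# warning (if printed) also lands in the log instead of the terminal.
nohup echo "galaxy started" > server.log 2>&1 &

# Wait for the background job to finish (only so the log exists below;
# for a real server you would leave it running and log out).
wait

# Inspect the named log instead of the default nohup.out.
cat server.log
```

If you omit the redirection, nohup appends output to nohup.out in the current directory, which is the file Jen suggests checking with `more nohup.out`.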
Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
On Feb 6, 2013, at 3:00 PM, greg wrote: Thanks Nate, It turns out I already had this as the first line of my job setup file: export TEMP=/scratch/galaxy But when I look in that directory, there's plenty of free space, and I also don't see any recent files there. So I'm wondering if the upload jobs aren't seeing that for some reason. Any ideas on how I could diagnose this more? Hi Greg, The first place to look would be in lib/galaxy/datatypes/sniff.py, line 96: fd, temp_name = tempfile.mkstemp() If you print temp_name, that will tell you what file the upload tool is writing to. You may also want to take a look at: http://docs.python.org/2/library/tempfile.html#tempfile.tempdir Some cluster environments set $TMPDIR, and if that is set, $TEMP will not be used. --nate -Greg Relevant info? grep env galaxy-dist/universe_wsgi.ini environment_setup_file = /usr/local/galaxy/job_environment_setup_file cat /usr/local/galaxy/job_environment_setup_file export TEMP=/scratch/galaxy #active Python virtual env just for galaxy source /usr/local/galaxy/galaxy_python/bin/activate ... path setup lines ... On Wed, Feb 6, 2013 at 2:37 PM, Nate Coraor n...@bx.psu.edu wrote: On Feb 6, 2013, at 2:32 PM, greg wrote: Hi guys, When I try to upload a directory of files from a server directory I'm seeing the error below. It appears to be trying to write to a temp directory somewhere that I'm guessing doesn't have enough space? Is there a way I can direct where it writes to for temporary files like this? Hi Greg, There are a few ways. For some parts of Galaxy, you will want to set new_file_path in universe_wsgi.ini to a suitable temp space. However, this is not the case for the upload tool. Am I understanding right, that these upload jobs are running on our cluster? I think it would be a problem if its trying to use the default temp directory on each cluster node since they aren't provisioned with much space. This is correct. 
On the cluster, add something to your user's shell startup files (or see the environment_setup_file option in universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment variable to a suitable temp space. --nate Please advise. Thanks, Greg [traceback quoted earlier snipped]
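Nate's point about tempfile's lookup order can be checked directly from Python. This is a minimal sketch: a throwaway directory is created on the fly rather than assuming a path like /scratch/galaxy exists.

```python
import os
import tempfile

# Create a writable directory and point TMPDIR at it (TMPDIR is checked
# before TEMP and TMP in tempfile's candidate list).
scratch = tempfile.mkdtemp()
os.environ["TMPDIR"] = scratch

# tempfile caches its choice the first time gettempdir() runs, so the
# cache must be cleared before re-asking.
tempfile.tempdir = None

print(tempfile.gettempdir())  # now reports the TMPDIR directory
```

This also shows why checking gettempdir() interactively, as Greg does below, is a good diagnostic: it reveals exactly which directory upload.py's mkstemp() call will write into.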
Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
Hmm, I narrowed down the problem some more: for some reason Python isn't respecting the TEMP environment variable, so it's trying to write to /tmp on whichever node it's running on. I really don't understand why Python isn't respecting it. The docs seem to suggest it should: http://docs.python.org/2/library/tempfile.html#tempfile.tempdir I ran:

qlogin
source /usr/local/galaxy/job_environment_setup_file
python
Python 2.7.1 (r271:86832, Apr 4 2011, 13:23:54)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
>>> tempfile.gettempdir()
'/tmp'
>>> import os
>>> os.environ['TEMP']
'/scratch/galaxy'

On Wed, Feb 6, 2013 at 3:00 PM, greg margeem...@gmail.com wrote: Thanks Nate, It turns out I already had this as the first line of my job setup file: export TEMP=/scratch/galaxy But when I look in that directory, there's plenty of free space, and I also don't see any recent files there. So I'm wondering if the upload jobs aren't seeing that for some reason. Any ideas on how I could diagnose this more? -Greg Relevant info? grep env galaxy-dist/universe_wsgi.ini environment_setup_file = /usr/local/galaxy/job_environment_setup_file cat /usr/local/galaxy/job_environment_setup_file export TEMP=/scratch/galaxy #activate Python virtual env just for galaxy source /usr/local/galaxy/galaxy_python/bin/activate ... path setup lines ... On Wed, Feb 6, 2013 at 2:37 PM, Nate Coraor n...@bx.psu.edu wrote: On Feb 6, 2013, at 2:32 PM, greg wrote: Hi guys, When I try to upload a directory of files from a server directory I'm seeing the error below. It appears to be trying to write to a temp directory somewhere that I'm guessing doesn't have enough space? Is there a way I can direct where it writes to for temporary files like this? Hi Greg, There are a few ways. For some parts of Galaxy, you will want to set new_file_path in universe_wsgi.ini to a suitable temp space. However, this is not the case for the upload tool.
Am I understanding right, that these upload jobs are running on our cluster? I think it would be a problem if it's trying to use the default temp directory on each cluster node since they aren't provisioned with much space. This is correct. On the cluster, add something to your user's shell startup files (or see the environment_setup_file option in universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment variable to a suitable temp space. --nate Please advise. Thanks, Greg [traceback quoted earlier snipped]
Re: [galaxy-dev] Problem referencing EMBOSS tools in universe_wsgi.ini
Hi Simon, Significant changes to the way that job run parameters are defined will be included in a future release. Specifically, there'll be no more job runner URLs, and parameters will be specified with a more robust XML language. These changes aren't stable enough to go out in the next release (scheduled for the end of this week), but they should be in the one after that. --nate Hi Nate, Thanks for that. We look forward to that release! cheers, Simon
Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device
Ok, when I ran Python in my last two emails I was running as myself, not the galaxy user, and only the galaxy user has write permission to /scratch/galaxy So that's why Python was ignoring /scratch/galaxy for me. If it doesn't have write access it tries the next temp directory in its list. I'm going to try debugging as the galaxy user next. -Greg On Wed, Feb 6, 2013 at 3:21 PM, greg margeem...@gmail.com wrote: Hi Nate, I don't see $TMPDIR being set on the cluster, in addition to my previous email I ran: print os.environ.keys() ['KDE_IS_PRELINKED', 'FACTERLIB', 'LESSOPEN', 'SGE_CELL', 'LOGNAME', 'USER', 'INPUTRC', 'QTDIR', 'PATH', 'PS1', 'LANG', 'KDEDIR', 'TERM', 'SHELL', 'TEMP', 'QTINC', 'G_BROKEN_FILENAMES', 'SGE_EXECD_PORT', 'HISTSIZE', 'KDE_NO_IPV6', 'MANPATH', 'HOME', 'SGE_ROOT', 'QTLIB', 'VIRTUAL_ENV', 'SGE_CLUSTER_NAME', '_', 'SSH_CONNECTION', 'SSH_TTY', 'HOSTNAME', 'SSH_CLIENT', 'SHLVL', 'PWD', 'MAIL', 'LS_COLORS', 'SGE_QMASTER_PORT'] But I think we've narrowed it down to something interfering with Python deciding the temp file location. I just can't figure out what. On Wed, Feb 6, 2013 at 3:18 PM, Nate Coraor n...@bx.psu.edu wrote: On Feb 6, 2013, at 3:00 PM, greg wrote: Thanks Nate, It turns out I already had this as the first line of my job setup file: export TEMP=/scratch/galaxy But when I look in that directory, there's plenty of free space, and I also don't see any recent files there. So I'm wondering if the upload jobs aren't seeing that for some reason. Any ideas on how I could diagnose this more? Hi Greg, The first place to look would be in lib/galaxy/datatypes/sniff.py, line 96: fd, temp_name = tempfile.mkstemp() If you print temp_name, that will tell you what file the upload tool is writing to. You may also want to take a look at: http://docs.python.org/2/library/tempfile.html#tempfile.tempdir Some cluster environments set $TMPDIR, and if that is set, $TEMP will not be used. --nate -Greg Relevant info? 
[earlier quoted messages and traceback snipped]
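Greg's eventual diagnosis - an unusable $TEMP/$TMPDIR target is silently skipped - can be reproduced in a few lines. Here a nonexistent directory stands in for one the current user cannot write to; tempfile treats both the same way and quietly moves on to the next candidate (eventually /tmp):

```python
import os
import tempfile

# Point TMPDIR at a directory that does not exist, which is unusable in
# the same way as a directory the user lacks write permission on.
os.environ["TMPDIR"] = "/nonexistent/scratch/galaxy"
tempfile.tempdir = None  # clear tempfile's cached choice

# tempfile probes each candidate by trying to create a file in it, so
# the unusable candidate is skipped without any warning.
fallback = tempfile.gettempdir()
print(fallback)
```

This is why debugging as the galaxy user matters: the candidate directory must be writable by the user the job actually runs as, not by whoever is testing interactively.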
Re: [galaxy-dev] Job handler : crash [SOLVED]
Thanks for your reply! The patch has fixed the problem! Christophe On 30/01/2013 00:58, Derrick Lin wrote: Hi, I had a similar issue a while ago, and it's fixed in https://bitbucket.org/galaxy/galaxy-central/commits/c015b82b3944f967e2c859d5552c00e3e38a2da0 Hope this helps. D On Wed, Jan 30, 2013 at 9:39 AM, Christophe Caron christophe.ca...@sb-roscoff.fr wrote: Hello, We run Galaxy (2013 January version) in load balancing mode (5 x web manager, 5 job handler) with Apache/Sun Grid Engine 6.0u4/CentOS 6.3. For the past two weeks, some job handler processes have crashed during Galaxy startup with this error message in handlerx.log: Starting server in PID 13634. serving on http://127.0.0.1:8091 galaxy.jobs.handler DEBUG 2013-01-29 20:06:48,902 Stopping job 22842: galaxy.jobs.handler DEBUG 2013-01-29 20:06:48,902 stopping job 22842 in drmaa runner The system log files report a segfault in libdrmaa: kernel: python[13977]: segfault at 0 ip 7f2811805dc5 sp 7f27f4aac0a0 error 4 in libdrmaa.so.1.0[7f28116dd000+185000] Thanks for your help! Christophe -- Christophe Caron Station Biologique / Service Informatique et Bio-informatique Place Georges Teissier - CS 90074 29688 Roscoff Cedex Analysis and Bioinformatics for Marine Science http://abims.sb-roscoff.fr/ christophe.ca...@sb-roscoff.fr tél: +33 (0)2 98 29 25 43 / +33 (0)6 07 83 54 77
Re: [galaxy-dev] card 79: Split large jobs over multiple nodes for processing
Hi All, Can anybody please add a few words on how we can use the "initial implementation" which "exists in the tasks framework"? -Alex From: Trello [mailto:do-not-re...@trello.com] Sent: Wednesday, 6 February 2013 10:58 AM To: Khassapov, Alex (CSIRO IMT, Clayton) Subject: 4 new notifications on the board Galaxy: Development since 5:56 PM (Tuesday) Notifications on Galaxy: Development (https://trello.com/board/galaxy-development/506338ce32ae458f6d15e4b3): James Taylor added Dannon Baker to the card "79: Split large jobs over multiple nodes for processing" (https://trello.com/card/79-split-large-jobs-over-multiple-nodes-for-processing/506338ce32ae458f6d15e4b3/411). James Taylor commented on the card "79: Split large jobs over multiple nodes for processing": An initial implementation exists in the tasks framework. James Taylor moved the card "79: Split large jobs over multiple nodes for processing" to Complete. James Taylor moved the card "137: allow multiple=true in input param fields of type data" (https://trello.com/card/137-allow-multiple-true-in-input-param-fields-of-type-data/506338ce32ae458f6d15e4b3/292) to Pull Requests / Patches.
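As a rough pointer for Alex: in Galaxy distributions of that era the tasks framework was enabled with a config switch along these lines. The option name is recalled from memory, not stated in this thread - verify it against your own universe_wsgi.ini and the sample tool wrappers before relying on it:

```ini
# universe_wsgi.ini -- assumed option name for enabling the tasks framework,
# which splits a job's inputs into chunks ("tasks") run in parallel and
# merges the outputs afterwards
use_tasked_jobs = True
```

Individual tool wrappers then opt in to splitting via a parallelism declaration in their XML; only tools carrying such a declaration are split.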
Re: [galaxy-dev] Reloading a tools configuration does not seem to actually work
I am indeed using multiple web processes, and I guess I am talking about the "old" admin tool reloader. Is there any other way to do this for your own tools that you just manually place in tools/, etc.?

Thon

On Feb 05, 2013, at 06:22 PM, Dannon Baker dannonba...@me.com wrote:

Are you using multiple web processes, and are you referring to the old admin tool reloader or the toolshed reloading interface?

-Dannon

On Feb 5, 2013, at 9:13 PM, Anthonius deBoer thondeb...@me.com wrote:

Hi,

I find that reloading a tool's configuration file does not really work. First, you have to click the reload button twice to actually have it update the VERSION number (so it does read something). But when I try to run my tool, the old bug is still there. I am using a proxy server, so something may still be cached, but I have to restart my server for it to actually pick up the changes. Any ideas?

Thon
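The multiple-web-process detail is likely the key here: a reload request only reaches whichever web process happens to serve it, so the other processes keep the stale tool definition and the old behavior reappears on the next request they handle. A typical multi-process setup of this era looked roughly like the sketch below in universe_wsgi.ini (based on the Galaxy scaling documentation of the time; the section names web0/web1 and ports are illustrative):

```ini
# Illustrative two-web-process configuration; a front-end proxy balances
# requests across the two ports. A tool reload submitted through the UI
# reaches only one of these processes.
[server:web0]
use = egg:Paste#http
port = 8080
use_threadpool = true

[server:web1]
use = egg:Paste#http
port = 8081
use_threadpool = true
```

With a setup like this, restarting all of the processes is the reliable way to make every web process pick up a changed tool.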
Re: [galaxy-dev] galaxy error
Hi Bassam,

I think your best bet is to explore this command's usage in general, since this is really a Unix question, and no detail about how you used it in Galaxy was provided for others to comment on. I am assuming you have already checked your Galaxy process, found out whether it ran, and restarted it or did whatever you needed to get your work done.

There isn't much more to add at the basic level, so you should certainly go forward. We might get an admin to comment on when or whether they use nohup, but this is a fairly common utility, and doing some research on how to use it (beyond the pointers I sent) is the next move. A quick search will likely turn up both a Unix forum where the grinding details are discussed among seasoned admins and a friendly web page where common usage cases are explained in patient detail, plus the full spectrum of advice in between (some good, some questionable).

Please do keep all follow-up on the list, and start brand-new threads for new questions addressed only to the mailing list, not to individual team members; this greatly helps us track questions and provide expedient feedback. If you do have a problem with a specific process, I would start fresh with a new thread (a new message, not just a reply with the subject line changed).

http://wiki.galaxyproject.org/Support#Mailing_Lists

For the best chance at a good reply, be specific about exactly what you are doing: whether you are on a local or cloud install (I am assuming local from your comments, but since this will be a new thread, include it), what changeset you are running, what troubleshooting you have done already, and so on. Those details help make the advice specific enough to solve the problem; vague questions are difficult to address since so many things could be factors. But don't be afraid to ask if you are stuck. Give as much detail as you can and send it out. Everyone started new at some point!
Here is some help about asking good questions: http://wiki.galaxyproject.org/MailingLists?action=show&redirect=Mailing+Lists#New_to_Mailing_Lists.3F

Take care,
Jen, Galaxy team

On 2/6/13 4:25 PM, Bassam Tork wrote:

Dear Jennifer, Thank you so much. Should I wait to receive email from galaxy-...@bx.psu.edu, or could I check the postings and answers on some web pages? Thanks. Bassam Tork.

On 2/6/13 12:13 PM, Jennifer Jackson wrote:

Hi Bassam,

First, I am going to post your question to the galaxy-...@bx.psu.edu mailing list. This is where local and cloud install Galaxy questions get the best exposure to the development community.

Next, this isn't necessarily an error. And was the full line really this?

nohup: ignoring input and redirecting stderr to stdout

This is more of a Linux issue than a Galaxy issue, but I'll give some feedback and others can comment as well (or a Google search will bring up many threads on the topic). Basically, this is just a stderr warning and can be ignored, IF you should be using nohup at all; it isn't for all jobs. Personally, I use it for data processing sometimes (general use, not as part of running Galaxy), but I think I am the only one here, since there are other options to keep processes detached from individual sessions. It is a habit. I see this message all the time and the job keeps on running. I could redirect the stderr to /dev/null, but I usually just ignore it (lazy = type fewer characters). But I don't admin our Galaxy server, and I wouldn't consider using nohup for that purpose, which is why I am posting to the mailing list, where the admins can comment.

On your system, the process might have actually run; this message has nothing to do with whether it ran or not. But you want to be careful: if the process is interactive (requires a password), you don't want to use nohup and redirect both stdout and stderr.
I don't know what command you ran, but you can do a few things to check whether the process is running.

See if the nohup.out file is keeping a log of the job (whether it is still running, or quit and put output there):

$ more nohup.out

See what jobs are running in the current session:

$ jobs

See what jobs are assigned to the Galaxy user:

$ ps -u galaxy

Also check whether what you asked the process to do has been done, or check the other Galaxy logs for activity along those lines. I am not sure if this is helpful or not, but perhaps it will point you in the correct direction (Linux command help/usage) where you can find out more.

Take care,
Jen, Galaxy team

On 2/6/13 11:24 AM, Bassam Tork wrote:

Dear Jennifer, When I ran Galaxy on our server it gave the error "nohup: ignoring input", although I provided all inputs. Appreciate your help. -- Bassam Tork.

--
Jennifer Hillman-Jackson
Galaxy Support and Training
http://galaxyproject.org
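To make the warning itself concrete: nohup emits "ignoring input" on stderr as a pure informational message, and the command's output goes to nohup.out unless you redirect it yourself. A small sketch (file names are illustrative):

```shell
# Run a command immune to hangup signals, capturing both stdout and stderr
# in an explicit log file. Without the redirect, nohup would append the
# output to ./nohup.out instead.
nohup sh -c 'echo "job output"' > demo.log 2>&1

# The log now contains the command's output; any "nohup: ignoring input"
# line that lands here is only a warning and does not affect the job.
cat demo.log
```

Whether the warning appears depends on whether stdin is a terminal, which is why the same command can look "noisy" in an interactive shell and silent in a script.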
Re: [galaxy-dev] Divide the web interface into sections for a tool.
Hi Luobin,

This isn't yet possible but is planned: https://trello.com/c/KxlQK0FB

Best, J.

On Feb 6, 2013, at 2:47 PM, Luobin Yang wrote:

Hi, Galaxy developers,

I would like the parameters on a tool's web interface to be packed into sections. Is there a way to do this? There is a "page" tag, but the documentation says we should avoid using it, and there is no replacement tag for it. The problem with the page tag is that there seems to be no "previous step" button on any page other than the first. If there were a "section" tag rendered as a box with a title, with all the tags inside that section rendered inside the box, that would be great. Even a horizontal bar to separate the sections would be helpful.

Thanks, Luobin
[galaxy-dev] VirtualEnv
Hello,

Everything seems to be working on my local Galaxy. I was talking to my IT guy who did the initial installation, and he said that virtualenv may not have been loaded. When he ran

% yum install python-virtualenv.noarch

he got a "not found" error. He did install Mercurial and then completed the install instructions from http://www.biocodershub.net/community/guest-post-notes-on-installing-galaxy/

What should we do at this point? Is virtualenv dispensable? What should I expect to be broken? What's the fix?

Thank you,

Gregory Thyssen, PhD
Molecular Biologist
Cotton Fiber Bioscience
USDA-ARS-Southern Regional Research Center
1100 Robert E Lee Blvd
New Orleans, LA 70124
gregory.thys...@ars.usda.gov
504-286-4280

This electronic message contains information generated by the USDA solely for the intended recipients. Any unauthorized interception of this message or the use or disclosure of the information it contains may violate the law and subject the violator to civil or criminal penalties. If you believe you have received this message in error, please notify the sender and delete the email immediately.
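On the "is virtualenv dispensable" question: a virtualenv only isolates Galaxy's Python dependencies from the system Python, so a working install without one is plausible. If the OS package is unavailable, an isolated environment can also be created by hand. A sketch, using the standard-library venv module as a stand-in (with the classic virtualenv script the first command would be `virtualenv galaxy_env` instead; the environment name is illustrative):

```shell
# Create an isolated Python environment and activate it before starting Galaxy.
# "python3 -m venv" is the stdlib equivalent of the virtualenv tool.
python3 -m venv galaxy_env
. galaxy_env/bin/activate
# sys.prefix now points inside galaxy_env, confirming isolation
python -c 'import sys; print(sys.prefix)'
deactivate
```

Packages installed while the environment is active land inside galaxy_env rather than in the system site-packages, which is the whole benefit virtualenv provides here.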
Re: [galaxy-dev] Velvet program files
I also figured this one out: my external USB hard drive's name had a space in it. So Velvet is running for the first time... got my fingers crossed. Thanks for all the help; I'm sure I'll need some later.

Greg

From: Thyssen, Gregory - ARS
Sent: Tuesday, February 05, 2013 3:34 PM
To: 'galaxy-...@bx.psu.edu'
Subject: RE: Velvet program files

I partially answered my own questions by reading the Tool Dependencies wiki page. I installed Velvet on my workstation. When I open the shell to start Galaxy, I modify $PATH to include the Velvet directory. Now the error I get in Velvet suggests it found the program, but something else is wrong. My new error is:

No reads found

The reads are fastq files that appear in my history and are in data libraries housed on an external hard drive. I have used these same data before on the public server. Please advise.

Thanks, Greg

From: Thyssen, Gregory - ARS
Sent: Tuesday, February 05, 2013 10:38 AM
To: 'galaxy-...@bx.psu.edu'
Subject: Velvet program files

So I am setting up my local Galaxy on a workstation for my own use. I have installed several programs from the Assembly section of the Main Tool Shed. None works. Are these just the wrappers? Do I need to install, say, Velvet somewhere outside of Galaxy? How do I do this?
My errors:

6: velveth on data 2 (error)
An error occurred running this job: Unable to run velveth: No such file or directory

7: Abyss on data 5 (error)
An error occurred running this job: ERROR: Expected exactly 3 arguments; got: 67 /media/My Passport/IM-Data2Bio-10262012/IM-mut1/Im-mutant-bulk_1.fastq /home/galaxy/galaxy-dist/database/files/000/dataset_9.dat

8: MIRA contigs (FASTA) (error)
An error occurred running this job: Traceback (most recent call last): File /home/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/peterjc/mira_assembler/3e7eca1f5d04/mira_assembler/tools/sr_assembly/mira.py, line 33, in mira_ver = get_version() File /home/galaxy/shed_tool

15: Singleton reads for velvet (error)
An error occurred running this job: Error running preppereads.sh /bin/sh: /users/galaxy/galaxyscripts/prepare_pe_reads_for_velvet.sh: No such file or directory

Thanks, Greg
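The "/media/My Passport/..." path in the Abyss error illustrates the shell pitfall behind the space-in-the-drive-name fix: an unquoted path containing a space is split into separate words, so a tool expecting 3 arguments suddenly sees many more, and file lookups fail. Quoting the expansion keeps the path intact. A minimal demonstration with an illustrative path and a tiny one-record FASTQ file:

```shell
# A path containing a space, like an external drive mounted at
# "/media/My Passport"
dir="/tmp/My Passport demo"
mkdir -p "$dir"

# Write a minimal 4-line FASTQ record into it
printf '@read1\nACGT\n+\n!!!!\n' > "$dir/reads.fastq"

# Unquoted, $dir would expand to two separate words ("/tmp/My" and
# "Passport demo") and the file would not be found. Quoted, the single
# argument survives word splitting:
wc -l < "$dir/reads.fastq"   # prints 4
```

Renaming the mount point to avoid spaces, as Greg did, sidesteps the issue entirely; quoting every path variable in wrapper scripts is the more general fix.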