Re: [galaxy-dev] provide a field to upload a file as part of tool input

2016-08-05 Thread Ryan G
I figured out the optional part (RTFM to myself)
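For the archive, a hedged sketch of the tool-XML syntax in question: marking a data input with optional="true". The param name, format, and label below are invented, since the original <param> line was stripped from this message.

```xml
<!-- optional="true" lets the user leave the input unset; "extra_file"
     is a hypothetical name, not the one from the original tool. -->
<param name="extra_file" type="data" format="txt" optional="true"
       label="Optional input file"
       help="Leave empty to run the tool without an extra file"/>
```

In the <command> block the parameter can then be guarded with a Cheetah conditional such as `#if $extra_file:` so the command line is built cleanly when no file is supplied.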




On Fri, Aug 5, 2016 at 6:54 PM, Ryan G wrote:


> Hi all - I have a custom tool that can optionally take a file as input.
>
> Right now, users have to upload the file into their history, then select
> the file when running the tool.
>
> Is there a way to let the user select a file local on their computer as
> input to the tool, then when run, the file is uploaded?
>
> And (second question), how can I make this field optional, such that the
> user does not need to provide a file at all when running the tool?
>
> Right now, my input field is:
>
> 
>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] provide a field to upload a file as part of tool input

2016-08-05 Thread Ryan G
Hi all - I have a custom tool that can optionally take a file as input.

Right now, users have to upload the file into their history, then select
the file when running the tool.

Is there a way to let the user select a file local on their computer as
input to the tool, then when run, the file is uploaded?

And (second question), how can I make this field optional, such that the
user does not need to provide a file at all when running the tool?

Right now, my input field is:



Re: [galaxy-dev] Parameter Storing in Database

2016-08-05 Thread Peter van Heusden
You mean the stuff for the tool-data tables? As I understand it, it is
loaded in:

lib/galaxy/tools/data/__init__.py

There's a ToolDataTableManager and a TabularToolDataTable (with associated
TabularToolDataField).

Peter

On Fri, 5 Aug 2016 at 20:46 Katherine Beaulieu <
katherine.beaulieu...@gmail.com> wrote:

> Would anyone be able to tell me in what file the storing of tool
> parameters into the database occurs?
> Thanks!
> Katherine

[galaxy-dev] Parameter Storing in Database

2016-08-05 Thread Katherine Beaulieu
Would anyone be able to tell me in what file the storing of tool parameters
into the database occurs?
Thanks!
Katherine

Re: [galaxy-dev] Command line building

2016-08-05 Thread Dannon Baker
Check out
https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/jobs/command_factory.py

Let me know if this wasn't what you were looking for.

-Dannon

On Fri, Aug 5, 2016 at 10:49 AM Katherine Beaulieu <
katherine.beaulieu...@gmail.com> wrote:

> Within the galaxy app itself, does anyone know where the command line is
> being built?
> Thanks!
> Katherine

Re: [galaxy-dev] Retrieving bam index file for visualization outside of Galaxy

2016-08-05 Thread Dannon Baker
Hi Scott,

This isn't currently exposed through the API, but I'm working on
implementing it at https://github.com/galaxyproject/galaxy/pull/2741, if
you'd like to test it.

-Dannon

On Thu, Aug 4, 2016 at 10:38 AM Ouellette, Scott <
scott_ouelle...@hms.harvard.edu> wrote:

> Hi all,
>
> I have a use case where I need to programmatically download both the
> .bam file and its index to run IGV outside of Galaxy, but have not
> found a straightforward way to do so.
> It seems that Galaxy always generates this .bai file under the hood,
> and I can download it manually by clicking the “save” icon on the dataset
> in my history. However, the application I am coding for relies on the
> bioblend Python library.
>
> ---
>
> In bioblend I see a field returned upon a
> `DatasetClient().show_dataset()` but that is
> as far as I've got:
>
> "meta_files": [
> {
> "file_type": "bam_index"
> }
> ],
>
> Any help is appreciated!
>
> Thanks,
> Scott O.
>
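Until that PR lands, a minimal client-side sketch for anyone following along: it only inspects the `show_dataset()` response shape quoted above (plain dict handling, no new API calls), and `bam_index_available` is a hypothetical helper name.

```python
def bam_index_available(dataset_info):
    """Return True if a bioblend show_dataset() response advertises a
    BAM index among its metadata files (the 'meta_files' field)."""
    return any(mf.get("file_type") == "bam_index"
               for mf in dataset_info.get("meta_files", []))

# Example using the response fragment from the thread:
info = {"meta_files": [{"file_type": "bam_index"}]}
print(bam_index_available(info))  # True
```

This at least lets a client detect whether the .bai exists before attempting any download route.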

Re: [galaxy-dev] conversion issue

2016-08-05 Thread Rathert, Philipp, Dr.
Yes I did.
All tools are installed and I see no error message.

Philipp

---Philipp on the road---

On 05.08.2016 at 15:31, Peter Cock wrote:

It sounds like bedGraphToBigWig has not been installed.

https://toolshed.g2.bx.psu.edu/view/brad-chapman/bam_to_bigwig/

The README says:

> Ensure the following command line tools are on the system path:
>
> pysam - Python interface to samtools (http://code.google.com/p/pysam/)
> genomeCoverageBed - part of BedTools (http://code.google.com/p/bedtools/)
> bedGraphToBigWig - from UCSC (http://hgdownload.cse.ucsc.edu/admin/exe/)

However, the tool does appear to declare the dependencies via
the Tool Shed mechanism - did you install them via the Tool Shed?

Peter
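As a quick way to confirm that diagnosis, a small sketch for checking the PATH (this uses Python 3's shutil.which; the Galaxy job itself ran Python 2, so this is meant for ad-hoc checking from a shell, not tool code):

```python
import shutil

def missing_tools(names):
    """Return the subset of command-line tools not found on $PATH.

    An 'OSError: [Errno 2] No such file or directory' raised by
    subprocess, as in the traceback below, typically means the
    executable itself could not be found.
    """
    return [name for name in names if shutil.which(name) is None]

# The three dependencies the bam_to_bigwig README lists:
print(missing_tools(["genomeCoverageBed", "bedGraphToBigWig", "samtools"]))
```

An empty list means all three dependencies are resolvable in the environment the job runs in.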

On Fri, Aug 5, 2016 at 2:23 PM, Rathert, Philipp, Dr. wrote:
> Dear All,
>
>
> I get the following errors when I try to Convert BAM to BigWig
>
>
> Traceback (most recent call last):
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/52bcd04ee0d6/bam_to_bigwig/bam_to_bigwig/bam_to_bigwig.py", line 103, in <module>
>     main(*args, **kwargs)
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/52bcd04ee0d6/bam_to_bigwig/bam_to_bigwig/bam_to_bigwig.py", line 48, in main
>     convert_to_graph(bam_file, split, config, temp_handle)
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/52bcd04ee0d6/bam_to_bigwig/bam_to_bigwig/bam_to_bigwig.py", line 76, in convert_to_graph
>     subprocess.check_call(cl, stdout=out_handle)
>   File "/usr/lib/python2.7/subprocess.py", line 535, in check_call
>     retcode = call(*popenargs, **kwargs)
>   File "/usr/lib/python2.7/subprocess.py", line 522, in call
>     return Popen(*popenargs, **kwargs).wait()
>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>     errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
>     raise child_exception
> OSError: [Errno 2] No such file or directory
>
> The tool produced the following additional output:
>
> Have 93 references
> Calculating coverage...
>
> or
>
> Fatal error: Exit code 1 ()
> bedtools:
> /galaxy-central/tool_deps/ucsc_tools/312/iuc/package_ucsc_tools_312/2d6bafd63401/lib/libstdc++.so.6:
> version `GLIBCXX_3.4.15' not found (required by bedtools)
> needLargeMem: trying to allocate 0 bytes (limit: 1000)
> Traceback (most recent call last):
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/9163e1db4c16/bam_to_bigwig/bam_to_bigwig.py", line 122, in <module>
>     main(*args, **kwargs)
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/9163e1db4c16/bam_to_bigwig/bam_to_bigwig.py", line 57, in main
>     convert_to_bigwig(temp_file, sizes, config, outfile)
>   File "/shed_tools/toolshed.g2.bx.psu.edu/repos/brad-chapman/bam_to_bigwig/9163e1db4c16/bam_to_bigwig/bam_to_bigwig.py", line 104, in convert_to_bigwig
>     subprocess.check_call(cl)
>   File "/galaxy-central/tool_deps/python/2.7.10/iuc/package_python_2_7_10/0339c4a9b87b/lib/python2.7/subprocess.py", line 540, in check_call
>     raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['bedGraphToBigWig',
> '/tmp/tmp2F67r6',
> '/export/galaxy-central/database/files/003/dataset_3116-sizes.txt',
> '/export/galaxy-central/database/files/003/dataset_3116.dat']' returned
> non-zero exit status 255
>
> The tool produced the following additional output:
>
> Have 93 references
> Calculating coverage...
> Converting 0 MB graph file to bigwig..
>
> depending what version of the tool i am using.
>
>
> Any idea what this can be?
>
>
> I am using the latest Docker-galaxy-stable version (16.04) but I think this
> is not related to docker. All tools are updated.
>
>
> Thank you very much for your help,
>
>
> Philipp
>

Re: [galaxy-dev] uploading multi-file archives and creation of potentially large collections

2016-08-05 Thread Gildas Le Corguillé
Hi Stephan,

I will only answer about uploading zip files. Since release 16.04, a zip
datatype has been integrated into the Galaxy distribution, but without a
sniffer, so your users will have to select the zip datatype manually before
uploading.

I also want to write this kind of tool, one that can extract a zip file
and produce dataset collections. I would also like to add the possibility to
create one dataset collection per folder (condition/phenotype). I started
something like that last week but ...
Thus, in the short term, I want to offer this tool to help my users switch to
dataset collections, but I want this transition to be as smooth as possible
(users are sometimes stubborn).

Thanks for asking; I will follow this thread closely.
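A tool like the one described above could start from a sketch like this (plain Python standard library, no Galaxy API; grouping by top-level folder as one collection per condition is an assumption about the desired layout, and the function name is hypothetical):

```python
import io
import zipfile
from collections import defaultdict

def group_zip_members_by_folder(zip_bytes):
    """Group a zip archive's files by their top-level folder, the way a
    'one dataset collection per folder' tool might."""
    groups = defaultdict(list)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith("/"):  # skip explicit directory entries
                continue
            folder, _, filename = name.rpartition("/")
            groups[folder or "."].append(filename)
    return dict(groups)

# Build a small archive in memory to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("control/a.txt", "x")
    zf.writestr("control/b.txt", "x")
    zf.writestr("treated/c.txt", "x")
print(group_zip_members_by_folder(buf.getvalue()))
# {'control': ['a.txt', 'b.txt'], 'treated': ['c.txt']}
```

Each group would then become one dataset collection, with the folder name as the collection name.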


Gildas

-
Gildas Le Corguillé - Bioinformatician/Bioanalyste

Platform ABiMS (Analyses and Bioinformatics for Marine Science)
http://abims.sb-roscoff.fr 

Member of the Workflow4Metabolomics project
http://workflow4metabolomics.org 

Station Biologique de Roscoff - UPMC/CNRS - FR2424
Place Georges Teissier 29680 Roscoff FRANCE
tel: +33 2 98 29 23 81
--



> On 4 August 2016 at 17:44, Stephan Oepen wrote:
> 
> colleagues,
> 
> in our adaptation of galaxy for large-scale natural language
> processing, a fairly common use pattern is to invoke a workflow on a
> potentially large number of text files.  hence, i am wondering about
> facilities for uploading an archive (in ‘.zip’ or ‘.tgz’ format, say)
> containing several files, where i would like the upload tool to
> extract the files from the archive, import each individually into my
> history, and (maybe optionally) create a list collection for the set
> of files.
> 
> in my current galaxy instance (running version 2015.03), when i upload
> a multi-file ‘.zip’ file, part of the above actually happens: however,
> the upload tool only imports the first file extracted from the archive
> (and helpfully shows a warning message on the corresponding history
> entry).  have there been relevant changes in this neighborhood in more
> recent galaxy releases?
> 
> related to the above, we have started to experiment with potentially
> large collections and are beginning to worry about the scalability of
> the collection mechanism.  in principle, we would like to operate on
> collections comprised of tens or hundreds of thousands of individual
> datasets.  what are common collection sizes (in the number of
> components, not so much in the aggregate file size) used in other
> galaxy instances to date?  what kind of gut reaction do galaxy
> developers have to the idea of a collection containing, say, a hundred
> thousand entries?
> 
> with thanks in advance,