Re: [galaxy-dev] disk space for local galaxy

2012-06-12 Thread Hans-Rudolf Hotz


 - Please keep all replies on the list by using reply all -


check your universe_wsgi.ini file:

# Allow users to remove their datasets from disk immediately (otherwise,
# datasets will be removed after a time period specified by an administrator
# in the cleanup scripts run via cron)
#allow_user_dataset_purge = False
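Uncommenting that line and setting it to True enables the option; a minimal sketch of the edited setting (the server needs a restart to pick up the change):

```ini
# universe_wsgi.ini -- let users free disk space themselves by purging datasets
allow_user_dataset_purge = True
```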


Regards, Hans


On 06/11/2012 05:08 PM, Xu, Jianpeng wrote:

Hi,

I read them and tried to work according to the instruction below on the
http://wiki.g2.bx.psu.edu/Learn/Managing%20Datasets#Delete_vs_Delete_Permanently

*Active* and *Deleted* histories can be *permanently deleted*: from the
History pane, go to Options -> Saved Histories, then click on "Advanced
Search", then click on "status: all". Check the box for the histories to
be discarded and then click on the "Permanently delete" button. I do not
find the "Permanently delete" button. I found a button "Delete and
remove datasets from disk", but when I click on it, nothing happens.

Do you know how to fix it?

Actually I want to delete some histories permanently and get some disk
space.

Thanks a lot.

Jianpeng



Hi

Have you looked at the wiki pages?

http://wiki.g2.bx.psu.edu/Learn/Managing%20Datasets#Delete_vs_Delete_Permanently

and

http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Purge%20Histories%20and%20Datasets



Regards, Hans

On 06/04/2012 05:29 PM, Xu, Jianpeng wrote:

 Hi,

 I have installed the local galaxy. I deleted a few histories in order to
 get some disk space. However, the disk space did not change after the
 deletion of the histories.

 Do you know why? How can I get some disk space back by deleting histories?

 Thanks,



 

 This e-mail message (including any attachments) is for the sole use of
 the intended recipient(s) and may contain confidential and privileged
 information. If the reader of this message is not the intended
 recipient, you are hereby notified that any dissemination, distribution
 or copying of this message (including any attachments) is strictly
 prohibited.

 If you have received this message in error, please contact
 the sender by reply e-mail message and destroy all copies of the
 original message (including attachments).


 ___
 Please keep all replies on the list by using reply all
 in your mail client. To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/



[galaxy-dev] Re: Composite output with self-declared datatypes

2012-06-12 Thread Marine Rohmer
Hi,

Maybe my message was not clear enough. I really need your help, so I'll try to
be more concise:

How do I make a composite output from 2 datatypes that I have declared myself?
I've followed the Composite Datatypes wiki but it seems that I've missed
something...
My composite datatype appears correctly in the file format list of Get Data's
upload section, but when I run my tool, I get 2 outputs, which are the
components of my primary datatype, instead of only one output.

Best regards,

Marine

 



From: Marine Rohmer marine.roh...@yahoo.fr
To: galaxy-dev@lists.bx.psu.edu
Sent: Friday, June 8, 2012, 3:15 PM
Subject: Composite output with self-declared datatype

Hi everyone,

I'm trying to add a tool which generates 2 files, that I will call .xxx (a
text file) and .yyy (a binary file). Both files are needed to use the result
of my tool with another tool I've added.
So I wanted to create a composite datatype, which I will call .composite,
whose components are .xxx and .yyy.

I've declared the datatypes .xxx, .yyy and .composite in the
datatypes_conf.xml file, and written the required Python files. Now, .xxx,
.yyy and .composite appear in Get Data's file format list.


These are my files:

In datatypes_conf.xml:

<datatype extension="xxx" type="galaxy.datatypes.xxx:xxx" mimetype="text/html" display_in_upload="True" subclass="True"/>
<datatype extension="yyy" type="galaxy.datatypes.yyy:yyy" mimetype="application/octet-stream" display_in_upload="True" subclass="True"/>
<datatype extension="composite" type="galaxy.datatypes.composite:Composite" mimetype="text/html" display_in_upload="True"/>


xxx.py (summarized):

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

class xxx(Text):
    file_ext = "xxx"

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
    


yyy.py (summarized):

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

# yyy is a binary file; I don't know what to put instead of Text (Binary and
# Bin don't work).
class yyy(Text):
    file_ext = "yyy"

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
        


composite.py (summarized):

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

class Composite(Text):
    composite_type = 'auto_primary_file'
    MetadataElement( name="base_name", desc="base name for all transformed versions of this index dataset", default="your_index", readonly=True, set_in_upload=True )
    file_ext = 'composite'

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
        self.add_composite_file( '%s.xxx', description = "XXX file", substitute_name_with_metadata = 'base_name' )
        self.add_composite_file( '%s.yyy', description = "YYY file", substitute_name_with_metadata = 'base_name', is_binary = True )
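For reference, with substitute_name_with_metadata set, the '%s' placeholder in each add_composite_file template is filled in from the base_name metadata element, so the components end up named after the dataset's base name. A self-contained sketch of that substitution (illustrative only, not Galaxy's actual code):

```python
# Illustrative sketch (not Galaxy's real implementation): with
# substitute_name_with_metadata='base_name', the '%s' in a composite-file
# template is filled from the dataset's base_name metadata element.
def resolve_component_name(template, metadata, metadata_key='base_name'):
    """Fill the '%s' placeholder in a composite-file template from metadata."""
    return template % metadata[metadata_key]

metadata = {'base_name': 'your_index'}  # default declared in the MetadataElement
print(resolve_component_name('%s.xxx', metadata))  # your_index.xxx
print(resolve_component_name('%s.yyy', metadata))  # your_index.yyy
```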



After having read Composite Datatypes in the wiki, my myTool.xml looks like:

<tool id="my tool">
  <command>path/to/crac-index-wrapper.sh
    ${os.path.join( $output_name_yyy.extra_files_path, '%s.yyy' )}
    ${os.path.join( $output_name_xxx.extra_files_path, '%s.xxx' )} $input_file
  </command>

  <inputs>
    <param name="output_name" type="text" value="IndexOutput" label="Output name"/>
    <param name="input_file" type="data" label="Source file" format="fasta"/>
  </inputs>
  <outputs>
    <data format="ssa" name="output_name_ssa" from_work_dir="crac-index_output.ssa" label="CRAC-index: ${output_name}.ssa">
    </data>
    <data format="conf" name="output_name_conf" from_work_dir="crac-index_output.conf" label="CRAC-index: ${output_name}.conf">
    </data>
  </outputs>
</tool>



I have 2 main problems:

When I upload a .xxx file via Get Data, there's no problem. However, when I
upload a .yyy file (the binary one), the history item stays blue ("uploading
dataset") forever, even for a small file.


The second problem is that I want my tool to generate only the .composite file
in the history, and not each of .xxx and .yyy. But when I run my tool I still
have 2 outputs displayed in the history: one for .xxx and one for .yyy.
Furthermore, neither of them works, and I get the following message:

path/to/myTool-wrapper.sh: 6: path/to/myTool-wrapper.sh.sh: cannot create /home/myName/work/galaxy-dist/database/files/000/dataset_302_files/%s.yyy.xxx: Directory nonexistent
path/to/myTool-wrapper.sh: 6: path/to/myTool-wrapper.sh: cannot create /home/myName/work/galaxy-dist/database/files/000/dataset_302_files/%s.yyy.yyy: Directory nonexistent
path/to/myTool-wrapper.sh: 11: path/to/myTool-wrapper.sh: Syntax error: redirection unexpected


So I've checked manually in /home/myName/work/galaxy-dist/database/files/000/
and there's only dataset_302.dat, an empty file.
(What's more, I don't understand why I get %s.yyy.xxx and %s.yyy.yyy in the
message instead of %s.yyy and %s.xxx ...)
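For what it's worth, a literal '%s.yyy' written into a command template is not substituted by os.path.join; the placeholder survives into the resulting path verbatim, which is consistent with the '%s' appearing in the error messages above. A standalone illustration (not Galaxy code):

```python
import os

# os.path.join performs no '%s' substitution: the placeholder survives
# verbatim, which matches the '%s' seen in the error paths above.
extra_files_path = '/home/myName/work/galaxy-dist/database/files/000/dataset_302_files'
path = os.path.join(extra_files_path, '%s.yyy')
print(path)  # .../dataset_302_files/%s.yyy -- the shell later receives '%s' literally
```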


Then 

[galaxy-dev] Instructions for migrating users, configuration and data to another Galaxy machine

2012-06-12 Thread Joachim Jacob

Hi all,


I am looking for instructions to migrate users, configuration and data 
to another Galaxy machine. I have a production Galaxy, with users, 
histories, pages, data libraries and a configured tool panel, running on a 
postgresdb. I want to move this information to a fresh Galaxy install 
(same version as production) on another machine.


Is this feasible to do without much hassle? I am thinking about a 
merge/sync of the configuration files and dumping and recreating the 
postgresdb... Does anyone have experience with this, and a checklist for it?


The idea is that the user should have no clue that Galaxy is on a new 
machine: all data, tools, histories, etc... are there. Thanks for your 
consideration.



Kind regards,
Joachim

--
Joachim Jacob, PhD

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib




[galaxy-dev] Giardine et al Galaxy paper (2005)

2012-06-12 Thread Peter van Heusden
Hi everyone

The Galaxy system as described in Belinda Giardine et al.'s 2005 Genome
Research paper ("Galaxy: A platform for interactive large-scale genome
analysis") appears to be radically different from the current Galaxy, at
least in its technical specification. The paper mentions a C core spoken
to by a Perl-based web user interface (referred to as the "History User
Interface"). The co-authors on the paper are, however, familiar names
from the current Galaxy team. Was this paper describing something in the
pre-history of the current Galaxy (it sounds rather like the current History
but without workflows, and of course implemented on a different platform)?

Peter


Re: [galaxy-dev] Instructions for migrating users, configuration and data to another Galaxy machine

2012-06-12 Thread Hans-Rudolf Hotz

Hi Joachim

This is roughly what we did  ~15 months ago:

1. make a copy of the MySQL DB (postgresdb in your case)

2. copy the complete galaxy directory to the new server (make sure you
   keep the path)

3. point the new galaxy server to the DB copy and start it (different
   port number)
   - due to the higher Python version, new eggs were downloaded
   - all python code was re-compiled

4. test the new server (while the old one is still in use)

5. stop the old server

6. rsync  ~/galaxy_dist/database/files/

7. point the new galaxy server to the 'live' DB and re-start it


Obviously, this won't be a 'fresh' Galaxy install.

If you want to start from a new download of the galaxy distribution, the 
amount of work (eg: merging of configuration files) depends on the 
existing modifications you have made. If you have only a few, 
copying/merging the universe_wsgi.ini and tool_conf.xml files 
plus copying the database/ and tools/ directories might be 
sufficient...or you may need to look into the tool-data/ directory, the 
datatypes_conf.xml file, etc, etc. But you can test all this while the 
new server is already running in parallel to the current server.


Regards, Hans







On 06/12/2012 01:14 PM, Joachim Jacob wrote:

Hi all,


I am looking for instructions to migrate users, configuration and data
to another Galaxy machine. I have a production Galaxy, with users,
histories, pages, data libraries, configured tool panel, running on a
postgresdb. I want to move this information to a fresh Galaxy install
(same version as production) on another machine.

Is this feasible to do without much hassle? I am thinking about a
merge/sync of configuration files and dumping and creating the
postgresdb,... Has anyone experience with this and a check list for this?

The idea is that the user should have no clue that Galaxy is on a new
machine: all data, tools, histories, etc... are there. Thanks for your
consideration.


Kind regards,
Joachim





Re: [galaxy-dev] Giardine et al Galaxy paper (2005)

2012-06-12 Thread Hans-Rudolf Hotz

Hi Peter

Have a look at Anton's talk "Introduction to Galaxy" at last year's GCC 
meeting: http://wiki.g2.bx.psu.edu/Events/GCC2011


The first few slides cover the history of Galaxy, eg:

"Galaxy as a single Perl script" (!)


Regards, Hans

On 06/12/2012 01:28 PM, Peter van Heusden wrote:

Hi everyone

The Galaxy system as described in Belinda Giardine et al's 2005 Genome
Research paper (Galaxy: A platform for interactive large-scale genome
analysis) appears to be radically different from the current Galaxy, at
least in its technical specification. The paper mentions a C core spoken
to by a Perl-based web user interface (referred to as the History User
Interface). The co-authors on the paper are, however, familiar names
from the current Galaxy team. Was this paper describing something in the
pre-history of current Galaxy (it sounds rather like the current History
but without workflows and of course implemented on a different platform)?

Peter


Re: [galaxy-dev] Giardine et al Galaxy paper (2005)

2012-06-12 Thread Bob Harris
And Galaxy had a predecessor called Gala, probably also implemented in Perl. 
Belinda may also have published a paper about Gala.

Bob H


On Jun 12, 2012, at 8:36 AM, Hans-Rudolf Hotz wrote:

 Hi Peter
 
 Have a look at Anton's talk Introduction to Galaxy at last years GCC 
 meeting: http://wiki.g2.bx.psu.edu/Events/GCC2011
 
 The first few slides are talking about the history of Galaxy, eg:
 
 Galaxy as a single Perl script (!)
 
 
 Regards, Hans
 
 On 06/12/2012 01:28 PM, Peter van Heusden wrote:
 Hi everyone
 
 The Galaxy system as described in Belinda Giardine et al's 2005 Genome
 Research paper (Galaxy: A platform for interactive large-scale genome
 analysis) appears to be radically different from the current Galaxy, at
 least in its technical specification. The paper mentions a C core spoken
 to by a Perl-based web user interface (referred to as the History User
 Interface). The co-authors on the paper are, however, familiar names
 from the current Galaxy team. Was this paper describing something in the
 pre-history of current Galaxy (it sounds rather like the current History
 but without workflows and of course implemented on a different platform)?
 
 Peter




Re: [galaxy-dev] Instructions for migrating users, configuration and data to another Galaxy machine

2012-06-12 Thread Joachim Jacob

Hi Hans-Rudolf,


Thank you for sharing your experience!

To summarize:
1. user information is stored in the database (histories, pages, 
workflows); this info is linked to the datasets,
2. all datasets are in the '../database/files' directory (or the location 
specified in universe_wsgi.ini), and
3. all configuration files are located in the root of the Galaxy directory.

So would it be feasible to run the postgresdb not locally, but on a 
separate machine, to which I connect the freshly installed Galaxy?
Then I can also keep the datasets directory on a network share, which is 
mounted to the fresh install.
[perhaps other directories can also be accessed over the network, such 
as the toolshed tools directory].


The configuration files are perhaps the most difficult part: these have 
to be merged. But perhaps I could use Mercurial for this? It seems a 
powerful way to do this.



Joachim.

Joachim Jacob, PhD

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib



On 06/12/2012 02:29 PM, Hans-Rudolf Hotz wrote:

Hi Joachim

This is roughly what we did  ~15 months ago:

1. make a copy of the MySQL DB (postgresdb in your case)

2. copy the complete galaxy directory to the new server (make sure you
   keep the path)

3. point the new galaxy server to the DB copy and start it (different
   port number)
   - due to the higher Python version, new eggs were downloaded
   - all python code was re-compiled

4. test the new server (while the old one is still in use)

5. stop the old server

6. rsync  ~/galaxy_dist/database/files/

7. point the new galaxy server to the 'live' DB and re-start it


Obviously, this won't be a 'fresh' Galaxy install.

If you want to start from a new download of the galaxy distribution, 
the amount of work (eg: merging of configuration files) depends on the 
existing modifications you have made. If you have only a few, the 
copy/merge of the: universe_wsgi.ini and tool_conf.xml files
plus a copy of the database/ and tools/ directory might be 
sufficient...or you need to look into the tool-data/ directory, the 
datatypes_conf.xml file, etc, etc. But you can test all this while 
the new server is already running in parallel to the current server.


Regards, Hans







On 06/12/2012 01:14 PM, Joachim Jacob wrote:

Hi all,


I am looking for instructions to migrate users, configuration and data
to another Galaxy machine. I have a production Galaxy, with users,
histories, pages, data libraries, configured tool panel, running on a
postgresdb. I want to move this information to a fresh Galaxy install
(same version as production) on another machine.

Is this feasible to do without much hassle? I am thinking about a
merge/sync of configuration files and dumping and creating the
postgresdb,... Has anyone experience with this and a check list for 
this?


The idea is that the user should have no clue that Galaxy is on a new
machine: all data, tools, histories, etc... are there. Thanks for your
consideration.


Kind regards,
Joachim






Re: [galaxy-dev] Instructions for migrating users, configuration and data to another Galaxy machine

2012-06-12 Thread Hans-Rudolf Hotz



On 06/12/2012 04:04 PM, Joachim Jacob wrote:

Hi Hans-Rudolf,


Thank you for sharing your experience!

To summarize:
1. user information is stored in the database (histories, pages,
workflows). This info is linked to the datasets,
2. all datasets are in the '../database/files' (or the location
specified in universe_wsgi.ini) directory, and
3. all configuration files are located in the root of Galaxy.

So would it be feasible to run the postgresdb not locally, but on a
separate machine, to which I connect the freshly installed Galaxy?
Then I can also keep the datasets directory on a network share, which is
mounted to the fresh install.
[perhaps other directories can also be accessed over the network, such
as the toolshed tools directory].



No, don't share the postgresdb and the datasets. Two different Galaxy 
installations accessing the same postgresdb and datasets will cause 
trouble.


Hence, I have steps 6 and 7 in my procedure, to make sure all the data 
accumulated during the test phase gets 'replaced' with the data the 
users continue to produce on the old production server.



The configuration files are perhaps the most difficult part: these have
to be merged. But perhaps I could use Mercurial for this? It seems a
powerful way to do this.


Yes, to some extent; see the .hgignore file


Regards, Hans




Joachim.

Joachim Jacob, PhD

Rijvisschestraat 120, 9052 Zwijnaarde
Tel: +32 9 244.66.34
Bioinformatics Training and Services (BITS)
http://www.bits.vib.be
@bitsatvib



On 06/12/2012 02:29 PM, Hans-Rudolf Hotz wrote:

Hi Joachim

This is roughly what we did ~15 months ago:

1. make a copy of the MySQL DB (postgresdb in your case)

2. copy the complete galaxy directory to the new server (make sure you
keep the path)

3. point the new galaxy server to the DB copy and start it (different
port number)
- due to the higher Python version, new eggs were downloaded
- all python code was re-compiled

4. test the new server (while the old one is still in use)

5. stop the old server

6. rsync ~/galaxy_dist/database/files/

7. point the new galaxy server to the 'live' DB and re-start it


Obviously, this won't be a 'fresh' Galaxy install.

If you want to start from a new download of the galaxy distribution,
the amount of work (eg: merging of configuration files) depends on the
existing modifications you have made. If you have only a few, the
copy/merge of the: universe_wsgi.ini and tool_conf.xml files
plus a copy of the database/ and tools/ directory might be
sufficient...or you need to look into the tool-data/ directory, the
datatypes_conf.xml file, etc, etc. But you can test all this while
the new server is already running in parallel to the current server.

Regards, Hans







On 06/12/2012 01:14 PM, Joachim Jacob wrote:

Hi all,


I am looking for instructions to migrate users, configuration and data
to another Galaxy machine. I have a production Galaxy, with users,
histories, pages, data libraries, configured tool panel, running on a
postgresdb. I want to move this information to a fresh Galaxy install
(same version as production) on another machine.

Is this feasible to do without much hassle? I am thinking about a
merge/sync of configuration files and dumping and creating the
postgresdb,... Has anyone experience with this and a check list for
this?

The idea is that the user should have no clue that Galaxy is on a new
machine: all data, tools, histories, etc... are there. Thanks for your
consideration.


Kind regards,
Joachim






Re: [galaxy-dev] Tool Shed Workflow

2012-06-12 Thread John Chilton
We are going in circles here :)

Me:
"my hope is for a way of programmatically importing and updating new tools"

You:
"This is currently possible (and very simple to do) using the Galaxy Admin UI"

I would not call using the Admin UI doing something programmatically.

You have done a brilliant job making it easy to install and update tools
via the Admin UI. I am not sure the experience could be made any
easier. I have instead been trying to ask how I might script
some of the actions you have enabled via the Admin UI.

My draconian theories about production environments aside, the second
use case - fully automating the creation of preconfigured Galaxy
instances for cloud images - requires this functionality if I
want to use tool sheds; it's not a taste thing. So I am going to
implement it - I need it, I just wanted your opinion on the best way
to go about it.

We should perhaps continue this conversation via pull request.

Thanks again,
-John

On Sat, Jun 9, 2012 at 6:01 AM, Greg Von Kuster g...@bx.psu.edu wrote:
 Hi John,

 I feel this is an important topic and that others in the community are 
 undoubtedly benefitting from it, so I'm glad you started this discussion.

 On Jun 9, 2012, at 12:36 AM, John Chilton wrote:

 We don't pull down from bitbucket directly to our production
 environment, we pull galaxy-dist changes into our testing repository,
 merge (that can be quite complicated, sometimes a multihour process),
 auto-deploy to a testing server, and then finally we push the tested
 changes into a bare production repo. Our sys admins then pull in
 changes from that bare production repo in our production environment.
 We also prebuild eggs in our testing environment not live on our
 production system. Given the complicated merges we need to do and the
 configuration files that need to be updated each dist update it would
 seem making those changes on a live production system would be
 problematic.

 Even if one was pulling changes directly from bitbucket into a
 production codebase, I think the dependency on bitbucket would be very
 different than on N toolsheds.

 These tool migrations will only interact with a single tool shed, the main 
 Galaxy tool shed.


 If our sys admin is going to update
 Galaxy and bitbucket is down, that is no problem: he or she can just
 bring Galaxy back up and update later. Now let's imagine they shut down
 our galaxy instance, updated the code base, did a database migration,
 and went to do a toolshed migration and that failed. In this case,
 instead of just bringing Galaxy back up, they will need to restore
 the database from backup and pull out the mercurial changes.

 In your scenario, if everything went well except the tool shed migration, an 
 option that would be less intrusive than reverting back to the previous 
 Galaxy release would be to just bring up your server without the migrated 
 tools for a temporary time.  When the tool shed migration process is 
 corrected (generally, the only reason it would break is if the tool shed was 
 down), you could run it at that time.  So the worst case scenario is that the 
 specific migrated tools will be temporarily unavailable from your production 
 Galaxy instance.

 A nice feature of these tool migration scripts is that they are very flexible 
 in when they can be run, which is any time.  They also do not have to be run 
 in any specific order.  So, for example, you could run tool migration script 
 0002 six months after you've run migration scripts 0003, 0004, etc.

 These scripts do affect the Galaxy database by adding new records to certain 
 tables, but if the script fails, no database corrections are necessary in 
 order to prepare for running the script again.  You can just run the same 
 script later, and the script will handle whatever database state exists at 
 that time.



 Anyway all of that is a digression right, I understand that we will
 need to have the deploy-time dependencies on tool sheds and make these
 tool migration script calls part of our workflow. My lingering hope is
 for a way of programmatically importing and updating new tools that
 were never part of Galaxy (Qiime, upload_local_file, etc...) using
 tool sheds.

 This is currently possible (and very simple to do) using the Galaxy Admin UI. 
  See the following sections of the tool shed wiki for details.

 http://wiki.g2.bx.psu.edu/Tool%20Shed#Automatic_installation_of_Galaxy_tool_shed_repository_tools_into_a_local_Galaxy_instance
 http://wiki.g2.bx.psu.edu/Tool%20Shed#Automatic_installation_of_Galaxy_tool_shed_repository_data_types_into_a_local_Galaxy_instance
 http://wiki.g2.bx.psu.edu/Tool%20Shed#Getting_updates_for_tool_shed_repositories_installed_in_a_local_Galaxy_instance

 I'm currently writing the following new section - it should be available 
 within the next week or so.

 http://wiki.g2.bx.psu.edu/Tool%20Shed#Automatic_3rd_party_tool_dependency_installation_and_compilation_with_installed_repositories



 My previous e-mail was 

[galaxy-dev] tool_data_table_config.xml.sample

2012-06-12 Thread Birgit Crain
Hi

I'm developing tools that use an entry in the tool_data_table_config.xml file. 
For upload into a repository I created a tool_data_table_config.xml.sample file 
with the corresponding table entry. I copied the structure for the file from 
the tool_data_table_config.xml in the galaxy download.

<tables>
    <!-- Start location of cgatools crr files -->
    <table name="cg_crr_files" comment_char="#">
        <columns>value, dbkey, name, path</columns>
        <file path="tool-data/cg_crr_files.loc" />
    </table>
    <!-- End location of cgatools crr files -->
</tables>

When I upload the tarball into the repository, the tools won't load properly 
and I get the following message:

not well-formed (invalid token): line 1, column 0

Any suggestions what I'm missing or how to fix this error?

Thanks, Birgit




The contents of this e-mail and any attachments are confidential and only for 
use by the intended recipient. Any unauthorized use, distribution or copying of 
this message is strictly prohibited. If you are not the intended recipient 
please inform the sender immediately by reply e-mail and delete this message 
from your system. Thank you for your co-operation.

Re: [galaxy-dev] tool_data_table_config.xml.sample

2012-06-12 Thread Greg Von Kuster
Hi Birgit,

I don't see any problem with your xml definition below.  Are you sure the 
problem is not with another file in the tarball?  If you can send me the 
tarball, I'll take a look.
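One quick local check before building the tarball is to parse the .sample file with Python's standard library xml.etree; malformed input fails with the same kind of "not well-formed"/line-and-column message (a generic sketch, independent of Galaxy's loader):

```python
import xml.etree.ElementTree as ET

# A well-formed .sample file parses cleanly; anything malformed raises
# ParseError with a line/column message like the one in the upload error
# ("line 1, column 0" points at the very first byte of the file).
sample = '''<tables>
    <table name="cg_crr_files" comment_char="#">
        <columns>value, dbkey, name, path</columns>
        <file path="tool-data/cg_crr_files.loc" />
    </table>
</tables>'''

root = ET.fromstring(sample)
print("OK: root element is <%s>" % root.tag)  # OK: root element is <tables>

try:
    ET.fromstring("")  # an empty or non-XML file fails at line 1, column 0
except ET.ParseError as e:
    print("Parse error: %s" % e)
```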

Greg Von Kuster


On Jun 12, 2012, at 4:09 PM, Birgit Crain wrote:

 Hi 
 
 I'm developing tools that use an entry in the tool_data_table_config.xml 
 file. For upload into a repository I created a 
 tool_data_table_config.xml.sample file with the corresponding table entry. 
 The structure for the file I copied from the  tool_data_table_config.xml. in 
 the galaxy download.
 
 <tables>
     <!-- Start location of cgatools crr files -->
     <table name="cg_crr_files" comment_char="#">
         <columns>value, dbkey, name, path</columns>
         <file path="tool-data/cg_crr_files.loc" />
     </table>
     <!-- End location of cgatools crr files -->
 </tables>
 
 When I upload the tar ball into the repository the tools won't load properly 
 and I get the following message:
 
 not well-formed (invalid token): line 1, column 0
 
 Any suggestions what I'm missing or how to fix this error?
 
 Thanks, Birgit
 
  
  

Re: [galaxy-dev] tool_data_table_config.xml.sample

2012-06-12 Thread Birgit Crain
Thanks, Greg

I tried uploading the same tarball without the 
tool_data_table_config.xml.sample file and the tools load fine (to troubleshoot, 
I added only tools that do not actually depend on the file). As soon as I add 
the .sample file to the tarball, I get the error message.

Thanks, Birgit


From: Greg Von Kuster g...@bx.psu.edu
Date: Tuesday, June 12, 2012 1:18 PM
To: Birgit Crain bcr...@completegenomics.com
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] tool_data_table_config.xml.sample

Hi Birgit,

I don't see any problem with your xml definition below.  Are you sure the 
problem is not with another file in the tarball?  If you can send me the 
tarball, I'll take a look.

Greg Von Kuster


On Jun 12, 2012, at 4:09 PM, Birgit Crain wrote:

Hi

I'm developing tools that use an entry in the tool_data_table_config.xml file.
For upload into a repository I created a tool_data_table_config.xml.sample file
with the corresponding table entry. I copied the structure for the file from
the tool_data_table_config.xml in the Galaxy download.

<tables>
    <!-- Start location of cgatools crr files -->
    <table name="cg_crr_files" comment_char="#">
        <columns>value, dbkey, name, path</columns>
        <file path="tool-data/cg_crr_files.loc" />
    </table>
    <!-- End Location of cgatools crr files -->
</tables>
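For reference, the .loc file named by that table entry is tab-separated, with one line per dataset and columns matching the `<columns>` declaration (value, dbkey, name, path). A purely hypothetical entry might look like:

```
#value	dbkey	name	path
build37	hg19	Human build 37 CRR	tool-data/build37.crr
```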

When I upload the tarball into the repository the tools won't load properly
and I get the following message:

not well-formed (invalid token): line 1, column 0

Any suggestions on what I'm missing or how to fix this error?

Thanks, Birgit










scripts.tar.gz
Description: scripts.tar.gz

Re: [galaxy-dev] About the Galaxy Data Library

2012-06-12 Thread Ciara Ledero
Thank you, Greg. I'll see it right away.

Re: [galaxy-dev] Configuring Galaxy for FTP upload

2012-06-12 Thread Ciara Ledero
Another question, Nate. After running Galaxy, I had problems accessing
the data in the created upload folder. Is there anything else I need
to do?

Thank you so much for all the help.

Re: [galaxy-dev] Dynamic job runner configuration followup

2012-06-12 Thread Matloob Khushi
Dear John

This sounds interesting; however, I've been struggling for quite some time with
limiting the number of options a user sees when the tool wrapper loads.

I have a separate database that records the results obtained from executions
of my tool, along with the $__user_email__, so my database table
holds the results for all Galaxy users. My tool also lets users perform
further downstream analysis on a previous run. Therefore, I am using
dynamic_options in the wrapper XML to populate a dropdown with the results/datasets
from previous runs of the tool to choose from.

The problem is that I currently have to show the datasets/results for all users. Ideally, I
would like to limit the options in the dropdown to the results belonging to the
logged-in user, by writing a query along the lines of select id, option from
myTable where useremail=$__user_email__. However, I have no idea how to grab
$__user_email__ at the time the function assigned via
dynamic_options=load_dynamic_values() in the XML wrapper is executed.

Would you have any idea how this could be achieved? Thanks.

Regards,

Matloob
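Leaving aside the open question of how the wrapper gets the email into the function, the per-user query itself can be sketched roughly as below. The database file, table, and column names are the hypothetical ones from the message above, and the (name, value, selected) tuple shape is what Galaxy-style dynamic option lists expect; the parameterized WHERE clause both filters per user and avoids interpolating the email directly into the SQL:

```python
import sqlite3

def load_dynamic_values(user_email):
    """Return option tuples (display name, value, selected) for the
    previous results belonging to a single user.

    "results.db", "myTable", and its columns are illustrative names,
    not anything Galaxy provides."""
    conn = sqlite3.connect("results.db")
    try:
        # Parameterized query: only this user's rows, no SQL injection.
        rows = conn.execute(
            "SELECT id, option FROM myTable WHERE useremail = ?",
            (user_email,),
        ).fetchall()
    finally:
        conn.close()
    return [(option, str(row_id), False) for row_id, option in rows]
```

How the email reaches the function is exactly the sticking point; one workaround some wrappers use is passing $__user_email__ from the XML as an explicit (hidden) parameter rather than relying on it being visible inside the code file.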

