Re: [galaxy-dev] Citing Galaxy + Toolshed in an app note of a small tool

2014-03-28 Thread Bossers, Alex
James,
I was looking for this as well a while ago.
Would be good to post it somewhere on the wiki in a prominent place...how to 
cite...
Thx
Alex

-----Original Message-----
From: galaxy-dev-boun...@lists.bx.psu.edu 
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of James Taylor
Sent: Thursday, March 27, 2014 20:21
To: Assaf Gordon
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Citing Galaxy + Toolshed in an app note of a small 
tool

Hey Assaf,

For Cite1 (Galaxy): doi:10.1186/gb-2010-11-8-r86

For Cite2 (ToolShed): doi:10.1186/gb4161

Thanks for asking!

-- jt


On Tue, Mar 25, 2014 at 1:15 PM, Assaf Gordon agor...@wi.mit.edu wrote:
 Hello Galaxy People!

 (it's been a while since I've last been here... a pleasure to be back).

 I intend to publish a small command-line utility, which will also be 
 available through Galaxy Toolshed.
 It'll be a small application note, so not a lot of space for many citations.

 The relevant sentence would read something like:
 The tool is also available for the Galaxy Bioinformatics Platform 
 [Cite1], with automatic installation provided through the Galaxy Tool Shed 
 [Cite2].

 What should I use for [Cite1] (Galaxy) and [Cite2] (ToolShed), out of 
 this impressively long list of publications:
   https://wiki.galaxyproject.org/CitingGalaxy


 Thanks!
  -gordon



 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this and other 
 Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

 To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] How to write test cases for custom tool

2014-03-28 Thread Janaki Rama Rao Gollapudi
Hi,

I have modified the tool XML file by adding the absolute path to the
output file, like below, and now my test cases are *passing*:

<tests>
    <test>
        <param name="input1" value="TCGA-02-0001-01"/>
        <param name="input2" value="Project-TSS-Participant"/>
        <output name="output" file="path to test-data directory/barcode-parse-ouput1.txt"/>
    </test>
    <test>
        <param name="input1" value="TCGA-02-0001-01"/>
        <param name="input2" value="Project TSS: Participant"/>
        <output name="output" file="path to test-data directory/barcode-parse-ouput2.txt"/>
    </test>
    <test>
        <param name="input1" value="TCGA-02-0001-01"/>
        <param name="input2" value="MyBarcodeProjectisTSS"/>
        <output name="output" file="path to test-data directory/barcode-parse-ouput3.txt"/>
    </test>
</tests>

Thanks for all your help.


Thanks,
JanakiRam


On Fri, Mar 28, 2014 at 2:39 PM, Janaki Rama Rao Gollapudi 
janakiram.gollap...@india.semanticbits.com wrote:

 Hi,

 I have gone through the resources and am able to run the test cases, but
 somehow my test cases are failing with the below exception:

   File 
 "/home/janakiram/Documents/Galaxy/galaxy-dist/test/base/twilltestcase.py", 
 line 192, in get_filename
 return os.path.abspath( os.path.join( file_dir, filename ) )

   File "/usr/lib/python2.7/posixpath.py", line 77, in join
 elif path == '' or path.endswith('/'):

 AttributeError: 'NoneType' object has no attribute 'endswith'

 My test cases are:
..

 <tests>
     <test>
         <param name="input1" value="TEST-0001-01"/>
         <param name="input2" value="TEST-Sample"/>
         <output name="output" file="barcode-parse-ouput1.txt"/>
     </test>
     <test>
         <param name="input1" value="TEST-0001-02"/>
         <param name="input2" value="TEST-Sample-New"/>
         <output name="output" file="barcode-parse-ouput2.txt"/>
     </test>
     <test>
         <param name="input1" value="TEST-0001-01"/>
         <param name="input2" value="MyTestSample"/>
         <output name="output" file="barcode-parse-ouput3.txt"/>
     </test>
 </tests>
 ..


 I put the barcode-parse-ouput1.txt, barcode-parse-ouput2.txt,
 barcode-parse-ouput3.txt files in the test-data folder (I also set the
 tool_dependency_dir in universe_wsgi.ini). Could you please help me with
 this?

 Please find the attached function test case output.

 Thanks & Regards,
 G.JanakiRam



 On Thu, Mar 27, 2014 at 7:40 PM, Janaki Rama Rao Gollapudi 
 janakiram.gollap...@india.semanticbits.com wrote:

 Thank you, I will go through the resources and will reply with my results to
 this email chain.

 Thanks,
 JanakiRam


 On Thu, Mar 27, 2014 at 6:39 PM, Greg Von Kuster g...@bx.psu.edu wrote:

 Hello Janaki,

 Thanks for clarifying that you have installed your tools from a Tool
 Shed.  In this case, functional tests do not use Galaxy's
 tool_conf.xml.sample, so changing it will make no difference.  For
 information about running functional tests on tools installed from the Tool
 Shed, see: https://wiki.galaxyproject.org/TestingInstalledTools

 Basically, you'll be doing the following.  The functional test framework
 is not currently set up to test a specific installed tool using a -id flag.
  You'll have to use just the -installed flag, which will end up testing all
 installed tools.


 export GALAXY_TOOL_DEPENDENCY_DIR=tool_dependencies;  sh 
 run_functional_tests.sh -installed


 Greg Von Kuster

 On Mar 27, 2014, at 8:58 AM, Janaki Rama Rao Gollapudi 
 janakiram.gollap...@india.semanticbits.com wrote:

 Hi,

 Thanks for the reply. I composed a detailed email for better clarity.

 When I run the grep command in the galaxy home folder ( ./run_functional_tests.sh
 -list | grep barcode-parse ), I get no results.
 What I did was:

- Implemented a custom tool (It has one python script and tool
definition file. These files are located in
../tools/Mytools/customToolName/)
- Then run the ToolShed in my local which is running on port 9009 (I
have)
- Created a new repository in the ToolShed (which is running on
9009) and uploaded .py and .xml files in to it
   - Now I run galaxy (running on port 8080), browse to my
   custom tool from my local galaxy, and install the custom tool successfully
   - And the shed_tool_conf.xml file was updated with a new section:

 <section id="mTools" name="MyTools" version="">
     <tool file="xxx.xx.x.xx/repos/janakiram-t1/barcode_parse_1/b6a60d02b1a2/barcode_parse_1/barcode-parse.xml"
           guid="xxx.xx.x.xx:9009/repos/janakiram-t1/barcode_parse_1/barcode-parse/1.0.0">
         <tool_shed>xxx.xxx.x.xx:9009/tool_shed</tool_shed>
         <repository_name>barcode_parse_1</repository_name>
         <repository_owner>janakiram-t1</repository_owner>
         <installed_changeset_revision>b6a60d02b1a2</installed_changeset_revision>
         <id>xxx.xxx.x.xx:9009/repos/janakiram-t1/barcode_parse_1/barcode-parse/1.0.0</id>
         <version>1.0.0</version>
     </tool>
 </section>

   - My tool definition file looks like below:

 <tool id="barcode-parse" name="Barcode parse">
     <description>some description</description>
   

[galaxy-dev] minimus2 wrapper

2014-03-28 Thread Peter Cock
Hi Edward,

Are you still working on your minimus2 wrapper? It does the basics very
nicely, taking FASTA files as input (hiding the conversion into AMOS format
internally): http://toolshed.g2.bx.psu.edu/view/edward-kirton/minimus2

One minor improvement: the prefix parameters should be conditional
rather than always shown (just some XML tweaking; see the sketch below).
I could do that, but where do you keep the upstream repository for your wrappers?
I'm presuming these are your accounts on GitHub and BitBucket:
https://bitbucket.org/eskirton  https://github.com/eskirton

Also I would like to see options for setting the parameters (REFCOUNT,
OVERLAP, CONSERR, MINID, MAXTRIM) for when the default values
do not work. That means XML additions and some Perl wrapper work
(outside my comfort zone).
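
For illustration, a minimal sketch of that kind of XML tweak, assuming a
text parameter named "prefix" (the names and default value here are
hypothetical, not taken from the actual wrapper):

<conditional name="prefix_opts">
    <!-- only show the prefix box when the user asks for one -->
    <param name="set_prefix" type="select" label="Set a custom output prefix?">
        <option value="no" selected="true">No</option>
        <option value="yes">Yes</option>
    </param>
    <when value="no"/>
    <when value="yes">
        <param name="prefix" type="text" value="minimus2" label="Output prefix"/>
    </when>
</conditional>

The command block would then reference $prefix_opts.prefix inside an
#if str($prefix_opts.set_prefix) == "yes" guard.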

Thanks,

Peter


[galaxy-dev] extract_genomic_dna.py

2014-03-28 Thread Adhemar
Hi,
In order to have the transcript_id for each sequence extracted from the
cuffmerge .gtf file, I had to change extract_genomic_dna.py by adding
the following lines after line 153:

attributes = gff_util.parse_gff_attributes( feature[8] )
if ( "transcript_id" in attributes ):
    name = attributes.get( "transcript_id", None )


This way the variable "name" gets the transcript_id if it exists.

If it's correct, I would appreciate this modification in future galaxy
distributions.

Thanks!
Adhemar

Re: [galaxy-dev] tophat2 install

2014-03-28 Thread Briand, Sheldon
Hi,

Just wanted to let you know that my tophat2 install is working now.  In case it 
helps someone else in the future: Manually placing tool_dependencies.xml in the 
correct shed_tools directory, placing the tool dependency package in the 
correct dependencies directory, and uninstalling and then installing the tool 
dependency package worked.

Thanks for the advice.

-Sheldon

-Original Message-
From: Greg Von Kuster [mailto:g...@bx.psu.edu] 
Sent: Tuesday, March 25, 2014 10:34 AM
To: Briand, Sheldon
Cc: galaxy-dev@lists.bx.psu.edu Dev
Subject: Re: [galaxy-dev] tophat2 install

If you installed it from the main Tool Shed, then this repository has the 
recipe for installing the package.

http://toolshed.g2.bx.psu.edu/view/devteam/package_tophat2_2_0_9 

Greg Von Kuster
 
On Mar 25, 2014, at 9:25 AM, Briand, Sheldon sheldon.bri...@ssc-spc.gc.ca 
wrote:

 Hi,
 
 Not sure if you saw my last email. Updating to the latest patch version 
 doesn't seem to help.  I was wondering where I could get the galaxy package 
 version of tophat2 and put it where it needs to be manually.  Then I could 
 try the uninstall/install and see if that helps.
 
 Thanks!
 -Sheldon
 
 -Original Message-
 From: galaxy-dev-boun...@lists.bx.psu.edu 
 [mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Briand, 
 Sheldon
 Sent: Thursday, March 20, 2014 3:11 PM
 To: 'Greg Von Kuster'
 Cc: galaxy-dev@lists.bx.psu.edu Dev
 Subject: Re: [galaxy-dev] tophat2 install
 
 No difference.  Same error
 
 -Original Message-
 From: Greg Von Kuster [mailto:g...@bx.psu.edu]
 Sent: Thursday, March 20, 2014 2:56 PM
 To: Briand, Sheldon
 Cc: galaxy-dev@lists.bx.psu.edu Dev
 Subject: Re: [galaxy-dev] tophat2 install
 
 Sorry, should have advised this:
 
 hg pull https://bitbucket.org/galaxy/galaxy-central#stable
 hg update stable
 
 See the end of this thread:
 
 http://dev.list.galaxyproject.org/Persistent-jobs-in-cluster-queue-eve
 n-after-canceling-job-in-galaxy-td4663719.html
 
 
 
 
 On Mar 20, 2014, at 1:51 PM, Björn Grüning bjoern.gruen...@gmail.com wrote:
 
 Hi,
 
 you need to track galaxy-central to make use of the patches that are applied 
 as bugfixes after the release. You can do that easily by changing the path 
 in .hg/hgrc ... but I do not know if that is the right way to do it ;) For me 
 it worked.
 
 Cheers,
 Bjoern
 
 On 20.03.2014 18:46, Briand, Sheldon wrote:
 pulling from https://bitbucket.org/galaxy/galaxy-dist
 searching for changes
 no changes found
 
 From: Greg Von Kuster [mailto:g...@bx.psu.edu]
 Sent: Thursday, March 20, 2014 2:36 PM
 To: Briand, Sheldon
 Cc: 'galaxy-dev@lists.bx.psu.edu'
 Subject: Re: [galaxy-dev] tophat2 install
 
 There have been several fixes released to the stable branch that you do not 
 have in 12441.  I would recommend doing the following:
 
 hg pull
 hg update stable
 
 Then uninstall the tophat2 repository and reinstall it.  See if that makes 
 a difference.
 
 
 On Mar 20, 2014, at 1:30 PM, Briand, Sheldon 
 sheldon.bri...@ssc-spc.gc.ca wrote:
 
 
 $ hg summary
 parent: 12441:dc067a95261d
 
 From the tool_dependencies.xml:
 
 <package name="tophat2" version="2.0.9">
     <repository changeset_revision="8549fd545473" 
                 name="package_tophat2_2_0_9" owner="devteam" 
                 prior_installation_required="False" 
                 toolshed="http://toolshed.g2.bx.psu.edu" />
 
 
 From: Greg Von Kuster [mailto:g...@bx.psu.edu]
 Sent: Thursday, March 20, 2014 2:23 PM
 To: Briand, Sheldon
 Cc: 'galaxy-dev@lists.bx.psu.edu'
 Subject: Re: [galaxy-dev] tophat2 install
 
 What version of Galaxy are you running?  You want to look in the tophat2 
 repository directory, not its package dependency.  There should be a 
 tool_dependencies.xml file in the tophat2's installation directory 
 hierarchy somewhere.
 
 On Mar 20, 2014, at 1:12 PM, Briand, Sheldon 
 sheldon.bri...@ssc-spc.gc.ca wrote:
 
 
 
 Hi,
 
 dependancies/tophat2/2.0.9/devteam/package_tophat2_2_0_9/8549fd54547
 3
 contains only:
 env.sh
 
 If that isn't the right place to look let me know.
 
 -Sheldon
 
 From: Greg Von Kuster [mailto:g...@bx.psu.edu]
 Sent: Thursday, March 20, 2014 2:04 PM
 To: Briand, Sheldon
 Cc: 'galaxy-dev@lists.bx.psu.edu'
 Subject: Re: [galaxy-dev] tophat2 install
 
 It looks like your database and file system are out of sync.  Your database 
 seems to think you have an installed repository but your file system does 
 not seem to have the repository's installation directory.  Can you confirm 
 that this is the case?
 
 
 On Mar 20, 2014, at 11:53 AM, Briand, Sheldon 
 sheldon.bri...@ssc-spc.gc.ca wrote:
 
 
 
 
 Hi,
 
 Here is the error from the paster.log:
 
 Traceback (most recent call last):
  File 
 "/BigData/galaxy/galaxy-dist/lib/tool_shed/util/common_install_util.py", 
 line 496, in handle_tool_dependencies

Re: [galaxy-dev] [CONTENT] Re: Re: Unable to remove old datasets

2014-03-28 Thread Nate Coraor
Hi Ravi,

If you take a look at the dataset's entry in the
history_dataset_association table, is that marked deleted?
admin_cleanup_datasets.py only marks history_dataset_association rows
deleted, not datasets.

Running the cleanup_datasets.py flow with -d 0 should have then caused the
dataset to be deleted and purged, but this may not be the case if there is
more than one instance of the dataset you are trying to purge (either
another copy in a history somewhere, or in a library).

--nate


On Tue, Mar 25, 2014 at 5:12 PM, Sanka, Ravi rsa...@jcvi.org wrote:

 I have now been able to successfully remove datasets from disk. After
 deleting the dataset or history from the front-end interface (as the user),
 I then run the cleanup scripts as admin:

 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -1 $@ >> ./scripts/cleanup_datasets/delete_userless_histories.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -2 -r $@ >> ./scripts/cleanup_datasets/purge_histories.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -3 -r $@ >> ./scripts/cleanup_datasets/purge_datasets.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -5 -r $@ >> ./scripts/cleanup_datasets/purge_folders.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -4 -r $@ >> ./scripts/cleanup_datasets/purge_libraries.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -6 -r $@ >> ./scripts/cleanup_datasets/delete_datasets.log

 However, my final goal is to have a process that can remove old datasets
 from disk regardless of whether or not the users have deleted them at the
 front-end (and then automate said process via cronjob). This is
 essential in a situation where users are likely to leave datasets
 unattended, accumulating disk space.

 I found the following Galaxy thread:


 http://dev.list.galaxyproject.org/Re-Improving-Administrative-Data-Clean-Up-pgcleanup-py-vs-cleanup-datasets-py-td4659330.html

 And am trying to use the script it mentions:

 python ./scripts/cleanup_datasets/admin_cleanup_datasets.py
 universe_wsgi.ini -d 30 --smtp <smtp server> --fromaddr rsa...@jcvi.org

 I chose -d 30 to remove all datasets older than 30 days, which currently
 only targets one dataset. The resulting stdout indicates success:

 
 # 2014-03-25 16:27:47 - Handling stuff older than 30 days
 Marked HistoryDatasetAssociation id 301 as deleted

 From: rsa...@jcvi.org
 To: isi...@jcvi.org
 Subject: Galaxy Server Cleanup - 1 datasets DELETED
 --
 Galaxy Server Cleanup
 -
 The following datasets you own on Galaxy are older than 30 days and have
 been DELETED:

 Small.fastq in history Unnamed history

 You may be able to undelete them by logging into Galaxy, navigating to the
 appropriate history, selecting "Include Deleted Datasets" from the history
 options menu, and clicking on the link to undelete each dataset that you
 want to keep.  You can then download the datasets.  Thank you for your
 understanding and cooperation in this necessary cleanup in order to keep
 the Galaxy resource available.  Please don't hesitate to contact us if you
 have any questions.

  -- Galaxy Administrators

 Marked 1 dataset instances as deleted
 

 But when I check the database, the status of dataset 301 is unchanged
 (ok-false-false-true).

 I then run the same cleanup_datasets.py routine from above (but with -d
 30), but dataset 301 is still present. I tried a second time, this time
 using -d 0, but still no deletion (which is not surprising since the
 dataset's deleted status is still false).

 If I run admin_cleanup_datasets.py again with the same parameters, the
 stdout says no datasets matched the criteria, so it seems to remember its
 previous execution, but it's NOT actually updating the database.

 What am I doing wrong?

 --
 Ravi Sanka
 ICS - Sr. Bioinformatics Engineer
 J. Craig Venter Institute
 301-795-7743
 --

 From: Carl Eberhard carlfeberh...@gmail.com
 Date: Tuesday, March 18, 2014 2:09 PM
 To: Peter Cock p.j.a.c...@googlemail.com
 Cc: Ravi Sanka rsa...@jcvi.org, galaxy-dev@lists.bx.psu.edu 
 galaxy-dev@lists.bx.psu.edu
 Subject: [CONTENT] Re: [galaxy-dev] Re: Unable to remove old datasets

 The cleanup scripts enforce a sort of lifetime for the datasets.

 The first time they're run, they may mark a dataset as deleted and also
 reset the update time and you'll have to wait N days for the next stage of
 the lifetime.

 The next time they're run, or if a dataset has already been marked as
 deleted, the actual file removal happens and purged is set to true (if it
 wasn't already).

 You can manually pass in '-d 0' to force removal of datasets recently
 marked as deleted.

 The purge scripts do not check 'allow_user_dataset_purge', of course.


 On Tue, Mar 

[galaxy-dev] Globus World 2014

2014-03-28 Thread Ravi K Madduri
Dear All

I wanted to send a note to folks about Globus World 2014. I apologize 
beforehand for spamming both the developer list and the users list, but I thought 
this may be relevant to folks on the lists. Please let me know if you have any 
questions.




GlobusWorld is this year’s biggest gathering of all things Globus. GlobusWorld 
2014 features a "Using Globus Genomics to Accelerate Analysis" tutorial, and a 
full half day on Globus Genomics in the main meeting, including a keynote by 
Nancy Cox and these accepted talks:


Globus Genomics: Enabling high-throughput cloud-based analysis and management 
of NGS data for Translational Genomics research at Georgetown, by Yuriy Gusev,
Improving next-generation sequencing variants identification in cancer genes 
using Globus Genomics, by Toshio Yoshimatsu
Globus Genomics: A Medical Center's Bioinformatics Core Perspective, by Anoop 
Mayampurath
Building a Low-budget Public Resource for Large-scale Proteomic Analyses, by 
Rama Raghavan

Globus Genomics is a Globus and Galaxy based platform for genomic analysis. 
GlobusWorld is being held April 15-17, in Chicago.  And, GCC2014 is a Silver 
Sponsor of GlobusWorld.

--
Ravi K Madduri
MCS, Argonne National Laboratory
Computation Institute, University of Chicago


Re: [galaxy-dev] [CONTENT] Re: Re: Re: Unable to remove old datasets

2014-03-28 Thread Sanka, Ravi
Hi Nate,

I checked the dataset's entry in history_dataset_association, and the value in 
field deleted is true.

But if this does not enable the cleanup scripts to remove the dataset from 
disk, then how can I accomplish that? As an admin, my intention is to 
completely remove datasets that are past a certain age from Galaxy, including 
all instances of the dataset that may exist, regardless of whether or not the 
various users who own said instances have deleted them from their histories.

Can this be done with admin_cleanup_datasets.py? If so, how?

--
Ravi Sanka
ICS – Sr. Bioinformatics Engineer
J. Craig Venter Institute
301-795-7743
--

From: Nate Coraor n...@bx.psu.edu
Date: Friday, March 28, 2014 9:59 AM
To: Ravi Sanka rsa...@jcvi.org
Cc: Carl Eberhard carlfeberh...@gmail.com, 
Peter Cock p.j.a.c...@googlemail.com, 
galaxy-dev@lists.bx.psu.edu
Subject: [CONTENT] Re: [galaxy-dev] Re: Re: Unable to remove old datasets

Hi Ravi,

If you take a look at the dataset's entry in the history_dataset_association 
table, is that marked deleted? admin_cleanup_datasets.py only marks 
history_dataset_association rows deleted, not datasets.

Running the cleanup_datasets.py flow with -d 0 should have then caused the 
dataset to be deleted and purged, but this may not be the case if there is more 
than one instance of the dataset you are trying to purge (either another copy 
in a history somewhere, or in a library).

--nate


On Tue, Mar 25, 2014 at 5:12 PM, Sanka, Ravi 
rsa...@jcvi.org wrote:
I have now been able to successfully remove datasets from disk. After deleting 
the dataset or history from the front-end interface (as the user), I then run 
the cleanup scripts as admin:

python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-1 $@ >> ./scripts/cleanup_datasets/delete_userless_histories.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-2 -r $@ >> ./scripts/cleanup_datasets/purge_histories.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-3 -r $@ >> ./scripts/cleanup_datasets/purge_datasets.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-5 -r $@ >> ./scripts/cleanup_datasets/purge_folders.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-4 -r $@ >> ./scripts/cleanup_datasets/purge_libraries.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-6 -r $@ >> ./scripts/cleanup_datasets/delete_datasets.log

However, my final goal is to have a process that can remove old datasets from 
disk regardless of whether or not the users have deleted them at the front-end 
(and then automate said process via cronjob). This is essential in a 
situation where users are likely to leave datasets unattended, accumulating 
disk space.

I found the following Galaxy thread:

http://dev.list.galaxyproject.org/Re-Improving-Administrative-Data-Clean-Up-pgcleanup-py-vs-cleanup-datasets-py-td4659330.html

And am trying to use the script it mentions:

python ./scripts/cleanup_datasets/admin_cleanup_datasets.py universe_wsgi.ini 
-d 30 --smtp <smtp server> --fromaddr rsa...@jcvi.org

I chose -d 30 to remove all datasets older than 30 days, which currently only 
targets one dataset. The resulting stdout indicates success:


# 2014-03-25 16:27:47 - Handling stuff older than 30 days
Marked HistoryDatasetAssociation id 301 as deleted

From: rsa...@jcvi.org
To: isi...@jcvi.org
Subject: Galaxy Server Cleanup - 1 datasets DELETED
--
Galaxy Server Cleanup
-
The following datasets you own on Galaxy are older than 30 days and have been 
DELETED:

Small.fastq in history Unnamed history

You may be able to undelete them by logging into Galaxy, navigating to the 
appropriate history, selecting "Include Deleted Datasets" from the history 
options menu, and clicking on the link to undelete each dataset that you want 
to keep.  You can then download the datasets.  Thank you for your understanding 
and cooperation in this necessary cleanup in order to keep the Galaxy resource 
available.  Please don't hesitate to contact us if you have any questions.

 -- Galaxy Administrators

Marked 1 dataset instances as deleted


But when I check the database, the status of dataset 301 is unchanged 
(ok-false-false-true).

I then run the same cleanup_datasets.py routine from above (but with -d 30), 
but dataset 301 is still present. I tried a second time, this time using -d 0, 
but still no deletion (which is not surprising since the dataset's deleted 
status is still false).


Re: [galaxy-dev] [CONTENT] Re: Re: Re: Unable to remove old datasets

2014-03-28 Thread Nate Coraor
Hi Ravi,

Can you check whether any other history_dataset_association or
library_dataset_dataset_association rows exist which reference the
dataset_id that you are attempting to remove?

When you run admin_cleanup_datasets.py, it'll set
history_dataset_association.deleted = true. After that is done, you need to
run cleanup_datasets.py with the `-6 -d 0` option to mark dataset.deleted =
true, followed by `-3 -d 0 -r ` to remove the dataset file from disk and
set dataset.purged = true. Note that the latter two operations will not do
anything until *all* associated history_dataset_association and
library_dataset_dataset_association rows are set to deleted = true.

--nate
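
Put together, the sequence described above would look something like this
(script paths as in the commands quoted below; log redirection omitted):

# 1. mark history_dataset_association rows older than 30 days as deleted
python ./scripts/cleanup_datasets/admin_cleanup_datasets.py universe_wsgi.ini -d 30
# 2. mark datasets whose instances are all deleted as dataset.deleted = true
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -6 -d 0
# 3. remove the files from disk and set dataset.purged = true
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -3 -d 0 -r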


On Fri, Mar 28, 2014 at 1:52 PM, Sanka, Ravi rsa...@jcvi.org wrote:

 Hi Nate,

 I checked the dataset's entry in history_dataset_association, and the
 value in field deleted is true.

 But if this does not enable the cleanup scripts to remove the dataset from
 disk, then how can I accomplish that? As an admin, my intention is to
 completely remove datasets that are past a certain age from Galaxy,
 including all instances of the dataset that may exist, regardless of
 whether or not the various users who own said instances have deleted them
 from their histories.

 Can this be done with admin_cleanup_datasets.py? If so, how?

 --
 Ravi Sanka
 ICS - Sr. Bioinformatics Engineer
 J. Craig Venter Institute
 301-795-7743
 --

 From: Nate Coraor n...@bx.psu.edu
 Date: Friday, March 28, 2014 9:59 AM
 To: Ravi Sanka rsa...@jcvi.org
 Cc: Carl Eberhard carlfeberh...@gmail.com, Peter Cock 
 p.j.a.c...@googlemail.com, galaxy-dev@lists.bx.psu.edu 
 galaxy-dev@lists.bx.psu.edu
 Subject: [CONTENT] Re: [galaxy-dev] Re: Re: Unable to remove old datasets

 Hi Ravi,

 If you take a look at the dataset's entry in the
 history_dataset_association table, is that marked deleted?
 admin_cleanup_datasets.py only marks history_dataset_association rows
 deleted, not datasets.

 Running the cleanup_datasets.py flow with -d 0 should have then caused the
 dataset to be deleted and purged, but this may not be the case if there is
 more than one instance of the dataset you are trying to purge (either
 another copy in a history somewhere, or in a library).

 --nate


 On Tue, Mar 25, 2014 at 5:12 PM, Sanka, Ravi rsa...@jcvi.org wrote:

 I have now been able to successfully remove datasets from disk. After
 deleting the dataset or history from the front-end interface (as the user),
 I then run the cleanup scripts as admin:

 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -1 $@ >> ./scripts/cleanup_datasets/delete_userless_histories.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -2 -r $@ >> ./scripts/cleanup_datasets/purge_histories.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -3 -r $@ >> ./scripts/cleanup_datasets/purge_datasets.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -5 -r $@ >> ./scripts/cleanup_datasets/purge_folders.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -4 -r $@ >> ./scripts/cleanup_datasets/purge_libraries.log
 python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini
 -d 0 -6 -r $@ >> ./scripts/cleanup_datasets/delete_datasets.log

 However, my final goal is to have a process that can remove old datasets
 from disk regardless of whether or not the users have deleted them at the
 front-end (and then automate said process via cronjob). This is
 essential in a situation where users are likely to leave datasets
 unattended, accumulating disk space.

 I found the following Galaxy thread:


 http://dev.list.galaxyproject.org/Re-Improving-Administrative-Data-Clean-Up-pgcleanup-py-vs-cleanup-datasets-py-td4659330.html

 And am trying to use the script it mentions:

 python ./scripts/cleanup_datasets/admin_cleanup_datasets.py
 universe_wsgi.ini -d 30 --smtp <smtp server> --fromaddr rsa...@jcvi.org

 I chose -d 30 to remove all datasets older than 30 days, which currently
 only targets one dataset. The resulting stdout indicates success:

 
 # 2014-03-25 16:27:47 - Handling stuff older than 30 days
 Marked HistoryDatasetAssociation id 301 as deleted

 From: rsa...@jcvi.org
 To: isi...@jcvi.org
 Subject: Galaxy Server Cleanup - 1 datasets DELETED
 --
 Galaxy Server Cleanup
 -
 The following datasets you own on Galaxy are older than 30 days and have
 been DELETED:

 Small.fastq in history Unnamed history

 You may be able to undelete them by logging into Galaxy, navigating to
 the appropriate history, selecting "Include Deleted Datasets" from the
 history options menu, and clicking on the link to undelete each dataset
 that you want to keep.  You can then download the datasets.  Thank you for
 your understanding and cooperation in this necessary cleanup 

[galaxy-dev] HELP: universe_wsgi.ini and job_conf.xml for installation using local and PBS pro cluster.

2014-03-28 Thread Luca Toldo
Dear Galaxians,
I'd greatly appreciate it if someone who has a running instance of galaxy
using local computing power as well as remote nodes (accessed with PBS
Pro) could share the files

universe_wsgi.ini
job_conf.xml

I've been trying very hard but failed to make noticeable progress.

I have a properly running local galaxy server and
a properly running (command-line) qsub to a PBS Pro cluster.
I have a few tools that I would like to run on the cluster, while the
majority of tools
should run on the local instance (and they already do so well).

Thank you for your patience and knowledge-sharing effort.
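
For anyone searching the archives later, here is a minimal job_conf.xml
sketch along these lines. The runner classes follow Galaxy's
job_conf.xml.sample_advanced; the tool id, worker count, and resource
values are hypothetical placeholders, not a tested configuration:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- local runner for most tools, PBS runner for cluster-bound ones -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="pbs_cluster" runner="pbs">
            <!-- qsub resource request for this destination -->
            <param id="Resource_List">walltime=24:00:00,nodes=1:ppn=4</param>
        </destination>
    </destinations>
    <tools>
        <!-- route only the heavy tools to the cluster -->
        <tool id="my_heavy_tool" destination="pbs_cluster"/>
    </tools>
</job_conf>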

Re: [galaxy-dev] HELP: universe_wsgi.ini and job_conf.xml for installation using local and PBS pro cluster.

2014-03-28 Thread Björn Grüning

Hi,

will share mine in a few minutes off the list.

Cheers,
Bjoern

On 28.03.2014 16:28, Luca Toldo wrote:

Dear Galaxians,
I'd greatly appreciate it if someone who has a running instance of galaxy
using local computing power as well as remote nodes (accessed with PBS
Pro) could share the files

universe_wsgi.ini
job_conf.xml

I've been trying very hard but failed to make noticeable progress.

I have a properly running local galaxy server and
a properly running (command-line) qsub to a PBS Pro cluster.
I have a few tools that I would like to run on the cluster, while the
majority of tools
should run on the local instance (and they already do so well).

Thank you for your patience and knowledge-sharing effort.





Re: [galaxy-dev] restarting Galaxy without affecting jobs

2014-03-28 Thread Nate Coraor
Hi David,

Setting track_jobs_in_database = True should not be required, recovery is
supposed to work either way.

Does Galaxy lose all jobs, or just the ones that completed while Galaxy was
restarting? Can you provide the output from the Galaxy log that shows an
attempt to recover a job and all related messages?

Thanks,
--nate


On Mon, Mar 24, 2014 at 11:13 AM, David Hoover hoove...@helix.nih.govwrote:

 What are the configuration steps required for allowing a local Galaxy
 installation to be restarted without affecting currently running jobs?  I
 have Galaxy using DRMAA to submit jobs onto a backend cluster.  I thought
 that enable_job_recovery = True should allow this, but in a few tests I
 have found that although the batch jobs completed, Galaxy lost track of the
 jobs and classified them as failed.  Would track_jobs_in_database = True be
 required?  This is currently set to the default 'None'.
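
 For reference, the settings in question live in universe_wsgi.ini; a
 minimal sketch, with track_jobs_in_database commented out since (per the
 reply above) it should not be required:

 # universe_wsgi.ini -- job recovery settings discussed in this thread
 enable_job_recovery = True
 #track_jobs_in_database = True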

 Our local Galaxy installation has become quite busy, and restarts are not
 possible without forcing users to restart their jobs.

 David Hoover
 Helix Systems Staff

Re: [galaxy-dev] Galaxy packages for Bio-Linux - an update + Apache proxying

2014-03-28 Thread Nate Coraor
Hi Tim,

I have recently been working on getting Galaxy Main's configs and
server-modified files and directories out of the galaxy-dist directory, so
our goals are aligning. Not everything can be moved without some trickery
(e.g. symlinks) but most paths, including the paths to shed_*.xml are
configurable in universe_wsgi.ini (which itself need not live in the
galaxy-dist directory).

--nate


On Mon, Mar 24, 2014 at 1:38 PM, Tim Booth tbo...@ceh.ac.uk wrote:

 Hi,

 This is mainly a message to Carlos and Eric who volunteered to get
 involved with my Debian packaging, but of course any other help will
 also be appreciated.  If either of you can spare time now it is a good
 moment to do so.

 I've been working on my package stuff over the last couple of weeks and
 have uploaded the latest to Debian-Med ready to build:


 http://anonscm.debian.org/viewvc/debian-med/trunk/packages/galaxy/trunk/debian/?view=log

 Note that you need the re-packed tarball as generated by
 get-orig-source.sh and if you have a problem generating that you can
 grab a copy of it here:


 https://launchpad.net/~nebc/+archive/galaxy/+files/galaxy_1.bl.py27.20140210.orig.tar.xz

 I've only built this for Ubuntu, and I know that to get it working on
 Debian you'll at least need to replace the upstart job with
 an /etc/init.d script.  After that I think you should have something
 working (see my commit notes).

 My latest efforts have been to try and get tool-shed installs working.
 Galaxy expects to be able to write to its own shed_tools_*_conf.xml
 files as well as to shed_tools and the tool-data directory.  It looks
 like there is work to have a separate shed_tool-data folder but this is
 not fully working so I'm seeing if I can patch it.  Either way, it's
 vital for packaging that the files managed by DPKG and the files
 (over)written by the Galaxy server are separated out into /usr and /var
 respectively.

 Cheers,

 TIM

 On Mon, 2013-12-16 at 16:27 +, Carlos Borroto wrote:
  Hi Tim,
 
  This sounds great. I'll be happy to help testing and hopefully find
  some time to help packaging once it gets into Debian Med(are you
  submitting all your packages there?).
 
  One question: for apache/nginx configuration, why not use something a la
  phpMyAdmin, which asks you if you want to preconfigure the package with
  a particular webserver? The name of the DEB packaging technology for
  asking these kinds of questions is evading me now. I think using something
  like that could open many possibilities in the future, like which database
  backend to use, home URL, admin user/password, etc...
 
  Thanks for your work on this,
  Carlos
 
 
  On Fri, Dec 13, 2013 at 7:03 AM, Tim Booth tbo...@ceh.ac.uk wrote:
   Hi All,
  
   As previously mentioned, I'm back working on packaging the Galaxy
 server
   as DEB packages for Bio-Linux (ie. Ubuntu 12.04 LTS) and ultimately
   pushing towards something that could be Debian compliant.  There's a
 way
   to go in that regard, but I do now have an updated package for
 Bio-Linux
   in final testing and it also has a new trick: doing  apt-get install
   galaxy-server-apache-proxy will set up just that with no further
   configuration needed.  The galaxy server appears at
   http://localhost/galaxy and users log in with their regular system
   username and password.  Uploads are enabled via regular SFTP so no
   special FTP server configuration is needed.
  
   It's a little hacky in parts but I'm generally pleased with the result.
   If anyone want to take a look I'd welcome comments.  It's not in the
   main BL repo yet but can be found here:
  
  
 https://launchpad.net/~nebc/+archive/galaxy/+sourcepub/3711751/+listing-archive-extra
  
   Cheers,
  
   TIM
  
   --
   Tim Booth tbo...@ceh.ac.uk
   NERC Environmental Bioinformatics Centre
  
   Centre for Ecology and Hydrology
   Maclean Bldg, Benson Lane
   Crowmarsh Gifford
   Wallingford, England
   OX10 8BB
  
   http://nebc.nerc.ac.uk
   +44 1491 69 2705
  

 --
 Tim Booth tbo...@ceh.ac.uk
 NERC Environmental Bioinformatics Centre

 Centre for Ecology and Hydrology
 Maclean Bldg, Benson Lane
 Crowmarsh Gifford
 Wallingford, England
 OX10 8BB

 http://nebc.nerc.ac.uk
 +44 1491 69 2705



Re: [galaxy-dev] homebrew python

2014-03-28 Thread Nate Coraor
Hi Joshua,

You may be able to trick Galaxy into using existing versions of OS X eggs;
they are built for both 32 and 64-bit Intel, but should work fine with a
single-arch build. If the attached patch works, let me know and I'll commit
it.

If you'd rather not mess with the Galaxy source, you should be able to
build the missing eggs using `python ./scripts/scramble.py -e
egg_package_name`. Usually, the fetch_eggs.py script will inform you of
this - is it not doing so in your case?

--nate


On Mon, Mar 24, 2014 at 9:54 PM, Joshua Udall jaud...@gmail.com wrote:

 For various reasons, I installed a Homebrew of python instead of the
 system version on OSX 10.9.2.

 Now, when galaxy initializes, it isn't looking in the right location for
 eggs (or they aren't placed in the right spot on my system). I was able to
 manually install several of the eggs and the galaxy startup would move to
 the next egg until here.

 Some eggs are out of date, attempting to fetch...
 Warning: MarkupSafe (a dependent egg of WebHelpers) cannot be fetched
 Traceback (most recent call last):
   File "./scripts/fetch_eggs.py", line 37, in <module>
 c.resolve() # Only fetch eggs required by the config
   File "/Users/galaxy/galaxy-old3/lib/galaxy/eggs/__init__.py", line 345,
 in resolve
 egg.resolve()
   File "/Users/galaxy/galaxy-old3/lib/galaxy/eggs/__init__.py", line 195,
 in resolve
 return self.version_conflict( e.args[0], e.args[1] )
   File "/Users/galaxy/galaxy-old3/lib/galaxy/eggs/__init__.py", line 226,
 in version_conflict
 r = pkg_resources.working_set.resolve( ( dist.as_requirement(), ),
 env, egg.fetch )
   File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 588, in
 resolve
 The `plugin_env` should be an ``Environment`` instance that contains
 pkg_resources.DistributionNotFound: numpy==1.6.0
 Fetch failed.


 I know numpy, WebHelpers, MarkupSafe are installed and they are current
 (maybe too current?) ... what would be a good way to resolve the conflict?

 The manual download below fails on a dozen packages or so.

 sudo python ./scripts/make_egg_packager.py py2.7-macosx-10.9-x86_64-ucs2
 Using Python interpreter at
 /usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python,
 Version 2.7.6 (default, Mar 24 2014, 14:39:40)
 [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.38)]
 This platform is 'py2.7-macosx-10.9-x86_64-ucs2'
 Override with:
   make_egg_packager.py forced-platform
 Completed packager is 'egg_packager-py2.7-macosx-10.9-x86_64-ucs2.py'.  To
 fetch eggs, please copy this file to a system with internet access and run
 with:
   python egg_packager-py2.7-macosx-10.9-x86_64-ucs2.py

 --
 Joshua Udall
 Assistant Professor
 295 WIDB
 Plant and Wildlife Science Dept.
 Brigham Young University
 Provo, UT 84602
 801-422-9307
 Fax: 801-422-0008
 USA




osx_intel_platform.patch
Description: Binary data

Re: [galaxy-dev] Question concerning the xml file for local tools with multiple output files.

2014-03-28 Thread Nate Coraor
Hi Lifeng,

Another option would be the 'from_work_dir' attribute on the <data> tag. Have
a look at the tophat repository in the Tool Shed for an example:

http://toolshed.g2.bx.psu.edu/view/devteam/tophat
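
For example, a minimal sketch of the two-output case discussed below,
using from_work_dir (the dataset names and formats are assumptions, not
taken from the tophat wrapper):

<outputs>
    <!-- pick the files up from the job working directory after the run -->
    <data name="output1" format="fasta" from_work_dir="output_name_base.fasta"/>
    <data name="output2" format="txt" from_work_dir="output_name_base.txt"/>
</outputs>

With this approach the command can keep writing to a fixed base name and
no rename/move step is needed in a wrapper.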

--nate


On Tue, Mar 25, 2014 at 11:29 AM, Hans-Rudolf Hotz
hansrudolf.h...@fmi.chwrote:

 Hi Lifeng

 I am glad to hear it works

 WRT your thoughts about using a wrapper script for each tool: I agree it
 might help you to standardize your tools, however it also introduces an
 extra step, which needs to be taken care of if  you want to change/upgrade
 your tool. Personally, I would only use a wrapper if it is necessary or it
 adds some benefits for the tool.  Others on the list might have a different
 opinion?

 Hans-Rudolf



 On Mar 25, 2014, at 3:13 PM, Lifeng Lin wrote:

  Works like a charm. Thank you!
  I am starting to wonder if I should use this wrapper approach on all scripts
 regardless of the original input-output format, for standardized
 integration and possible automated integrations in the future.
 
 
  On Tue, Mar 25, 2014 at 6:53 AM, Hans-Rudolf Hotz 
 hansrudolf.h...@fmi.ch wrote:
  Hi Lifeng
 
  The easiest way to execute your script will be to provide a wrapper
 script (written in your preferred language, e.g. Python, perl, etc). Call the
 wrapper script like this:
 
  <command>wrapper $input $output1 $output2</command>
 
  and define $output1 and $output2 according to your needs: format="fasta",
 format="txt"
 
 
  the wrapper will call your script and rename/move the output.
 
 
  Hope this helps,
  Regards, Hans-Rudolf
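 
  A minimal sketch of such a wrapper as a shell script (the wrapper and
  script names are hypothetical, following the example quoted below):
 
  #!/bin/sh
  # wrapper: run the script against a scratch base name, then move its
  # two products to the paths Galaxy allocated for $output1 and $output2.
  input="$1"; out_fasta="$2"; out_txt="$3"
  base="outbase_$$"
  perl script_name "$input" "$base"
  mv "$base.fasta" "$out_fasta"
  mv "$base.txt" "$out_txt"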
 
 
  On Mar 25, 2014, at 3:23 AM, galaxy-user-boun...@lists.bx.psu.edu wrote:
 
  
   From: Lifeng Lin linlif...@gmail.com
   Date: March 25, 2014 12:58:52 AM GMT+01:00
   To: galaxy-u...@lists.bx.psu.edu
   Subject: Question concerning the xml file for local tools with
 multiple output files.
  
  
   Hi folks,
  
   I am trying to integrate some of my local Perl scripts into a
 downloaded instance of Galaxy. So far the script with a simple in_file to
 out_file format worked great, but I am having problems understanding how
 to treat scripts with multiple output files that share the same input argv.
   for example, the script runs like this on the command line:
  
   script name input_name output_name_base
  
   and two files are generated from this script: output_name_base.fasta
 and output_name_base.txt.
  
   I am at a loss as to how these parameters should be represented in the xml
 format, especially how the <outputs> <data name=.../> tags should be filled,
 since in the <command> tag there is only one $output.
  
   Any suggestions?
  
   thanks!
   #GalaxyNoobie
  
  
 
 



Re: [galaxy-dev] jobs stuck in new state

2014-03-28 Thread Nate Coraor
Hi David,

This is pretty common in the case of workflows. When a workflow step fails,
the next job in the workflow will be set to the paused state and all jobs
downstream of the paused job will remain in the new state until
corrective action is taken. The current query for finding jobs-ready-to-run
(if tracking jobs in the database, which is automatically enabled for
multiprocess Galaxy configurations) ignores 'new' state jobs whose inputs
are not ready, so these jobs sitting around should not cause any harm.

--nate


On Wed, Mar 26, 2014 at 12:25 PM, David Hoover hoove...@helix.nih.govwrote:

 I have many jobs stuck in the 'new' state on our local Galaxy instance.
  The jobs can't be stopped using the Admin -> Manage jobs tool.  First, does
 anyone know why a job would get stuck in the 'new' state for weeks?  I have
 cleaned things up by manually setting their states to 'error' in the MySQL
 database.  Is there a better way of dealing with 'new' jobs?

 BTW, our Galaxy instance was updated about two weeks ago.

 Wondering,
 David Hoover
 Helix Systems Staff

Re: [galaxy-dev] ERROR executing tool

2014-03-28 Thread Nate Coraor
On Thu, Mar 27, 2014 at 7:13 AM, virginia dalla via virdalla...@hotmail.com
 wrote:


 Hi,

 I tried to run the Groomer on my fastq data, and galaxy did not allow
 me: Error executing tool: objectstore, __call_method failed: get_filename on
 , kwargs: {}

 could you please help me?

 Thank you


Hi,

If you're using the Galaxy Main server at usegalaxy.org, could you use the
green bug icon to report this directly through Galaxy?

Thanks,
--nate



 viR






Re: [galaxy-dev] [CONTENT] Re: Re: Re: Re: Unable to remove old datasets

2014-03-28 Thread Sanka, Ravi
Hi Nate,

I checked and there are 3 rows for dataset 301 in the 
history_dataset_association table (none in library_dataset_dataset_association):

dataset_id   create_time     update_time     deleted
301          2/14/14 18:49   3/25/14 20:27   TRUE
301          3/6/14 15:48    3/25/14 18:41   TRUE
301          3/6/14 20:11    3/6/14 20:11    FALSE

The one with the most recent create_time has its deleted status set to false. 
The other two, older ones are true.

I would have guessed that the most recent create_time instance is still false 
due to being created within 30 days, but the second most recent is only 5 hours 
older and is set to true. Perhaps that instance was deleted by its user. That 
would cause its deleted status to become true, correct?

I assume that if I were to wait until all 3 instances' create_times are past 30 
days, my process will work, as admin_cleanup_datasets.py will mark all 3 
instances deleted.

Perchance, is there any setting on admin_cleanup_datasets.py that would cause 
it to judge datasets by their physical file's timestamp instead?

--
Ravi Sanka
ICS – Sr. Bioinformatics Engineer
J. Craig Venter Institute
301-795-7743
--

From: Nate Coraor n...@bx.psu.edu
Date: Friday, March 28, 2014 1:56 PM
To: Ravi Sanka rsa...@jcvi.org
Cc: Carl Eberhard carlfeberh...@gmail.com, 
Peter Cock p.j.a.c...@googlemail.com, 
galaxy-dev@lists.bx.psu.edu
Subject: [CONTENT] Re: Re: [galaxy-dev] Re: Re: Unable to remove old datasets

Hi Ravi,

Can you check whether any other history_dataset_association or 
library_dataset_dataset_association rows exist which reference the dataset_id 
that you are attempting to remove?

When you run admin_cleanup_datasets.py, it'll set 
history_dataset_association.deleted = true. After that is done, you need to run 
cleanup_datasets.py with the `-6 -d 0` option to mark dataset.deleted = true, 
followed by `-3 -d 0 -r ` to remove the dataset file from disk and set 
dataset.purged = true. Note that the latter two operations will not do anything 
until *all* associated history_dataset_association and 
library_dataset_dataset_association rows are set to deleted = true.

--nate


On Fri, Mar 28, 2014 at 1:52 PM, Sanka, Ravi 
rsa...@jcvi.org wrote:
Hi Nate,

I checked the dataset's entry in history_dataset_association, and the value in 
field deleted is true.

But if this does not enable the cleanup scripts to remove the dataset from 
disk, then how can I accomplish that? As an admin, my intention is to 
completely remove datasets that are past a certain age from Galaxy, including 
all instances of the dataset that may exist, regardless of whether or not the 
various users who own said instances have deleted them from their histories.

Can this be done with admin_cleanup_datasets.py? If so, how?

--
Ravi Sanka
ICS – Sr. Bioinformatics Engineer
J. Craig Venter Institute
301-795-7743
--

From: Nate Coraor n...@bx.psu.edu
Date: Friday, March 28, 2014 9:59 AM
To: Ravi Sanka rsa...@jcvi.org
Cc: Carl Eberhard carlfeberh...@gmail.com, 
Peter Cock p.j.a.c...@googlemail.com, 
galaxy-dev@lists.bx.psu.edu
Subject: [CONTENT] Re: [galaxy-dev] Re: Re: Unable to remove old datasets

Hi Ravi,

If you take a look at the dataset's entry in the history_dataset_association 
table, is that marked deleted? admin_cleanup_datasets.py only marks 
history_dataset_association rows deleted, not datasets.

Running the cleanup_datasets.py flow with -d 0 should have then caused the 
dataset to be deleted and purged, but this may not be the case if there is more 
than one instance of the dataset you are trying to purge (either another copy 
in a history somewhere, or in a library).

--nate


On Tue, Mar 25, 2014 at 5:12 PM, Sanka, Ravi 
rsa...@jcvi.org wrote:
I have now been able to successfully remove datasets from disk. After deleting 
the dataset or history from the front-end interface (as the user), I then run 
the cleanup scripts as admin:

python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-1 $@ >> ./scripts/cleanup_datasets/delete_userless_histories.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-2 -r $@ >> ./scripts/cleanup_datasets/purge_histories.log
python ./scripts/cleanup_datasets/cleanup_datasets.py ./universe_wsgi.ini -d 0 
-3 -r $@ >> ./scripts/cleanup_datasets/purge_datasets.log
python 

Re: [galaxy-dev] trackster is not working on the vrelease_2014.02.10--2--29ce93a13ac7

2014-03-28 Thread Jeremy Goecks
I just downloaded a fresh copy of galaxy-dist and everything worked fine with 
Trackster. This suggests that your Galaxy installation is somehow corrupted. 
You’ll need to roll back any changes to your repository and/or start from a 
fresh copy.

Let us know if you need help doing this.

Best,
J.

--
Jeremy Goecks
Assistant Professor of Computational Biology
George Washington University



On Mar 27, 2014, at 4:28 AM, Shu-Yi Su shu-yi...@embl.de wrote:

 Hi Jeremy,
 
 I reverted the files that I have changed, and also cleaned the cache once 
 again.  The error messages are the same:
 
 Failed to load resource: the server responded with a status of 404 (Not Found)
 require.js:1:1910Error: Script error for: libs/backbone/backbone-relational
 http://requirejs.org/docs/errors.html#scripterror
 http://gbcs-dev/galaxy-dev/static/scripts/utils/galaxy.utils.jsFailed to load 
 resource: the server responded with a status of 404 (Not Found)
 require.js:1:1910Error: Script error for: utils/galaxy.utils
 http://requirejs.org/docs/errors.html#scripterror
 
 Thanks a lot…..
 
 Best,
 Shu-Yi
 
 On Mar 27, 2014, at 12:00 AM, Jeremy Goecks wrote:
 
 The next step then is to revert all the changes that you pulled from 
 -central and report back the errors you’re seeing. Manually pulling selected 
 change sets can be problematic if you don’t get all the dependencies.
 
 Best,
 J.
 
 --
 Jeremy Goecks
 Assistant Professor of Computational Biology
 George Washington University
 
 
 
 On Mar 26, 2014, at 11:04 AM, Shu-Yi Su shu-yi...@embl.de wrote:
 
 Hi Jeremy,
 
 I cleaned the catche on both safari and firefox, but it doesn't work. It 
 still shows the same error messages.
 
 Thank you very much for the help!!
 
 Best,
 Shu-Yi
 
 On Mar 26, 2014, at 2:00 PM, Jeremy Goecks wrote:
 
  This sounds like a cache issue. Both of these scripts have been removed 
  from the codebase, so they should be absent from the distribution. Can 
 you try clearing your cache and see if that fixes the issue?
 
 Thanks,
 J.
 
 --
 Jeremy Goecks
 Assistant Professor of Computational Biology
 George Washington University
 
 
 
 On Mar 26, 2014, at 4:39 AM, Charles Girardot charles.girar...@embl.de 
 wrote:
 
 Hi Jeremy,
 
 After checking, the two js scripts are absent from the release:
 
 backbone-relational.js ( static/scripts/packed/libs/backbone/ )
 galaxy.utils.js ( static/scripts/packed/utils/ )
 
 bw
 
 C
 
 On 25 Mar 2014, at 16:16, Shu-Yi Su wrote:
 
 Hi Jeremy,
 
 Thank you very much for the reply.
 
 Yes, we are running on galaxy-dist, and manually pulled to update our 
 installation. The release version is 
 vrelease_2014.02.10--2--29ce93a13ac7.
 I have tried safari and firefox. Both are not working.
 Here is the error massage from console:
 [Error] Failed to load resource: the server responded with a status of 
 404 (Not Found) (backbone-relational.js, line 0)
 [Error] Failed to load resource: the server responded with a status of 
 404 (Not Found) (galaxy.utils.js, line 0)
 [Error] Error: Script error for: libs/backbone/backbone-relational
 http://requirejs.org/docs/errors.html#scripterror
  defaultOnError (require.js, line 1)
  onError (require.js, line 1)
  onScriptError (require.js, line 1)
 [Error] Error: Script error for: utils/galaxy.utils
 http://requirejs.org/docs/errors.html#scripterror
  defaultOnError (require.js, line 1)
  onError (require.js, line 1)
  onScriptError (require.js, line 1);;;
 
  We are also wondering if there is anything we didn't set up properly for 
  our universe_wsgi.ini file.
 
 Thank you.
 
 Best,
 Shu-Yi
 
 
 On Mar 25, 2014, at 4:05 PM, Jeremy Goecks wrote:
 
 Providing some additional information will help diagnose the problem:
 
 *are you running galaxy-dist? If so, have you manually pulled and 
 applied commits from galaxy-central? If so, which ones?
 *which Web browser are you using?
 *can you please open the JavaScript console in your browser and provide 
 any errors that you see?
 
 Thanks,
 J.
 
 --
 Jeremy Goecks
 Assistant Professor of Computational Biology
 George Washington University
 
 
 
 On Mar 24, 2014, at 11:38 AM, Shu-Yi Su shu-yi...@embl.de wrote:
 
 Hi all,
 
  We have recently updated our local Galaxy installation to 
  vrelease_2014.02.10--2--29ce93a13ac7 (database version is 118). I 
  found that Trackster is not working. I have checked the latest 
  commits related to Trackster bugs, so I have updated these files:
 ./static/scripts/viz/trackster.js (commits date: 2014-02-28)
 ./static/scripts/viz/trackster_ui.js (commits date: 2014-02-28)
 ./static/scripts/viz/trackster/tracks.js (commits date: 2014-03-16)
 ./static/scripts/utils/utils.js (commits date: 2014-03-19)
 ./static/scripts/utils/config.js (commits date: 2014-03-15)
 
  But it is still not working. I have tried different formats… bam, bed, 
  sam… none are working.
  I looked into all possible files I can think of that might cause 
  the problems but still don't have any clues. 
 I also looked into the log, and