I just downloaded a fresh copy of galaxy-dist and everything worked fine with
Trackster. This suggests that your Galaxy installation is somehow corrupted.
You’ll need to roll back any changes to your repository and/or start from a
fresh copy.
Let us know if you need help doing this.
Best,
J.
-
Hi Nate,
I checked, and there are 3 rows for dataset 301 in the
history_dataset_association table (none in library_dataset_dataset_association):
dataset_id   create_time     update_time     deleted
301          2/14/14 18:49   3/25/14 20:27   TRUE
301          3/6/14 15:48    3/25/14 18:41   TRUE
301
On Thu, Mar 27, 2014 at 7:13 AM, virginia dalla via wrote:
>
> Hi,
>
> I tried to run the FASTQ Groomer on my fastq data, and Galaxy did not allow
> me: Error executing tool: objectstore, __call_method failed: get_filename on
> , kwargs: {}
>
> could you please help me?
>
> Thank you
>
Hi,
If you
Hi David,
This is pretty common in the case of workflows. When a workflow step fails,
the next job in the workflow will be set to the "paused" state and all jobs
downstream of the paused job will remain in the "new" state until
corrective action is taken. The current query for finding jobs-ready-t
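The state propagation described above can be sketched as a toy model (function
and state names here are illustrative, not Galaxy's actual scheduler code):

```python
# Toy model of the behavior described above: when a workflow step fails,
# its immediate dependent is set to "paused", and all jobs further
# downstream stay in the "new" state until corrective action is taken.
def propagate_failure(states, failed_index):
    states = list(states)  # don't mutate the caller's list
    states[failed_index] = "error"
    if failed_index + 1 < len(states):
        states[failed_index + 1] = "paused"
    for i in range(failed_index + 2, len(states)):
        states[i] = "new"
    return states

print(propagate_failure(["ok", "ok", "ok", "ok"], 1))
# ['ok', 'error', 'paused', 'new']
```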
Hi Lifeng,
Another option would be the 'from_work_dir' attribute on the <data> output tag.
Have a look at the tophat repository in the Tool Shed for an example:
http://toolshed.g2.bx.psu.edu/view/devteam/tophat
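For reference, a minimal sketch of what that looks like in a tool's XML (the
output name and file name here are made up; see the tophat repository linked
above for the real thing):

```xml
<outputs>
    <!-- Copy a file the tool wrote into its working directory into the
         output dataset; "accepted_hits.bam" is illustrative. -->
    <data name="output_bam" format="bam" from_work_dir="accepted_hits.bam" />
</outputs>
```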
--nate
On Tue, Mar 25, 2014 at 11:29 AM, Hans-Rudolf Hotz
wrote:
> Hi Lifeng
>
> I am glad to
Hi Joshua,
You may be able to trick Galaxy into using the existing versions of the OS X
eggs; they are built for both 32- and 64-bit Intel, but should work fine with a
single-arch build. If the attached patch works, let me know and I'll commit
it.
If you'd rather not mess with the Galaxy source, you shoul
Hi Tim,
I have recently been working on getting Galaxy Main's configs and
server-modified files and directories out of the galaxy-dist directory, so
our goals are aligning. Not everything can be moved without some trickery
(e.g. symlinks) but most paths, including the paths to shed_*.xml are
confi
Hi David,
Setting track_jobs_in_database = True should not be required; recovery is
supposed to work either way.
Does Galaxy lose all jobs, or just the ones that completed while Galaxy was
restarting? Can you provide the output from the Galaxy log that shows an
attempt to recover a job and all re
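For anyone following along, the setting under discussion lives in
universe_wsgi.ini (section name per Galaxy's sample config of the time):

```ini
[app:main]
# Track job state in the database rather than in memory. Per the reply
# above, job recovery across restarts should work with either setting.
track_jobs_in_database = True
```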
Hi,
I will share mine in a few minutes, off the list.
Cheers,
Bjoern
On 28.03.2014 at 16:28, Luca Toldo wrote:
> Dear Galaxians,
> I'd greatly appreciate it if someone who has a running instance of Galaxy
> using local computing power as well as remote nodes (accessed with PBS
> Pro) could share the files
Dear Galaxians,
I'd greatly appreciate it if someone who has a running instance of Galaxy
using local computing power as well as remote nodes (accessed with PBS
Pro) could share the files
universe_wsgi.ini
job_conf.xml
I've been trying very hard but failed to make noticeable progress.
I have a lo
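As a starting point, here is a rough sketch of a job_conf.xml combining a
local runner with a PBS runner. This is untested against PBS Pro, and the
destination ids and the Resource_List value are made up; the plugin load paths
and overall layout follow the sample job_conf.xml that shipped with
galaxy-dist in this era:

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="local" type="runner"
                load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="pbs" type="runner"
                load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="2"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <!-- Jobs go to the local runner unless a tool is mapped elsewhere. -->
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="pbs_default" runner="pbs">
            <!-- Illustrative resource request; adjust for your cluster. -->
            <param id="Resource_List">nodes=1:ppn=4</param>
        </destination>
    </destinations>
</job_conf>
```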
Hi Ravi,
Can you check whether any other history_dataset_association or
library_dataset_dataset_association rows exist which reference the
dataset_id that you are attempting to remove?
When you run admin_cleanup_datasets.py, it'll set
history_dataset_association.deleted = true. After that is done
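The check suggested above amounts to two queries. A self-contained sketch,
using sqlite3 as a stand-in for Galaxy's database (the table and column names
follow the Galaxy schema discussed in this thread; the sample rows mimic the
dataset 301 case above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for the two association tables mentioned above.
cur.execute("CREATE TABLE history_dataset_association "
            "(id INTEGER, dataset_id INTEGER, deleted BOOLEAN)")
cur.execute("CREATE TABLE library_dataset_dataset_association "
            "(id INTEGER, dataset_id INTEGER, deleted BOOLEAN)")
cur.executemany("INSERT INTO history_dataset_association VALUES (?, ?, ?)",
                [(1, 301, True), (2, 301, True), (3, 301, True)])

def rows_referencing(dataset_id):
    """Count rows in each association table that reference dataset_id."""
    counts = {}
    for table in ("history_dataset_association",
                  "library_dataset_dataset_association"):
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE dataset_id = ?",
                    (dataset_id,))
        counts[table] = cur.fetchone()[0]
    return counts

print(rows_referencing(301))
# {'history_dataset_association': 3, 'library_dataset_dataset_association': 0}
```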
Hi Nate,
I checked the dataset's entry in history_dataset_association, and the value in
field "deleted" is true.
But if this does not enable the cleanup scripts to remove the dataset from
disk, then how can I accomplish that? As an admin, my intention is to
completely remove datasets that are
Dear All
I wanted to send a note to folks about GlobusWorld 2014. I apologize beforehand
for spamming both the developer list and the users list, but I thought this may
be relevant to folks on the lists. Please let me know if you have any
questions.
GlobusWorld is this year’s biggest
Hi Ravi,
If you take a look at the dataset's entry in the
history_dataset_association table, is that marked deleted?
admin_cleanup_datasets.py only marks history_dataset_association rows
deleted, not datasets.
Running the cleanup_datasets.py flow with -d 0 should have then caused the
dataset to b
Hi,
Just wanted to let you know that my tophat2 install is working now. In case it
helps someone else in the future: Manually placing tool_dependencies.xml in the
correct shed_tools directory, placing the tool dependency package in the
correct dependencies directory, and uninstalling and then
Hi,
In order to have the transcript_id for each sequence extracted from the
cuffmerge .gtf file I had to change the extract_genomic_dna.py by adding
the following lines after line 153:
attributes = gff_util.parse_gff_attributes( feature[8] )
if ( "transcript_id" in attribut
Hi Edward,
Are you still working on your minimus2 wrapper? It does the basics very
nicely, taking FASTA files as input (hiding the conversion into AMOS format
internally): http://toolshed.g2.bx.psu.edu/view/edward-kirton/minimus2
One minor improvement: the prefix parameters should be conditional
Hi,
I have modified the tool dependency XML file by adding the absolute path to the
output file, like below, and now my test cases are *passing*:
Thanks for all your help.
Thanks,
JanakiRam
On Fri, Mar 28, 2014 at 2:39 PM, Janaki Rama Rao Gollapudi <
janakiram.gollap...@india.
James,
I was looking for it as well a while ago.
It would be good to post it somewhere prominent on the wiki... how to
cite...
Thx
Alex
-----Original message-----
From: galaxy-dev-boun...@lists.bx.psu.edu
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On behalf of James Taylor
Sent