I will modify the GFF file as you mentioned and update Galaxy.
Thanks a lot.
Yec'han
Yec'han LAIZET
Engineer
Plateforme Genome Transcriptome
Tel: 05 57 12 27 75
INRA-UMR BIOGECO 1202
Genetics team (Équipe Génétique)
69 route
Hi Vladimir
I contacted the vendor's tech support (Dell) with this question, but they
could not answer (or did not want to) and directed me to the Galaxy
developers. I am using RHEL 5.8 and Scientific Linux 5.5 and want to install
a local instance of Galaxy. Both my systems are based on Python 2.4.
Question: can I
On Wednesday, October 31, 2012, Edward Hills wrote:
Thanks Peter.
My next question is: I have found that VCF files don't get split properly,
because the header is not included in the second file, which tools such as
vcf-subset usually require. I have read the code and am happy to implement
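For what it's worth, the fix amounts to replaying the header lines at the top of every chunk. A minimal sketch of the idea, not the actual Galaxy split code:

```python
# Hedged sketch: split VCF lines into chunks of at most
# records_per_chunk records, repeating the header (lines starting
# with '#', which in a valid VCF precede all records) at the top of
# every chunk so tools like vcf-subset can consume each part alone.
def split_vcf(lines, records_per_chunk):
    header, chunks, current = [], [], []
    for line in lines:
        if line.startswith("#"):
            header.append(line)
            continue
        current.append(line)
        if len(current) == records_per_chunk:
            chunks.append(header + current)
            current = []
    if current:  # flush the final, possibly short, chunk
        chunks.append(header + current)
    return chunks
```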
Hi. Resending because I got no response. Can anybody suggest anything that
might explain this, or tell me how I can troubleshoot? Where should I look
in the Python code? Has anybody seen anything like this? Our beta
tester can't actually test anything. This occurs whether he does the
FTP-style
Hi all;
I ran into SSL certification errors when using Java to connect to Galaxy
main via the API. My knowledge of this stuff is minimal, but I did some
searching and discovered that the certificate chain on Galaxy main is a problem:
On Oct 31, 2012, at 8:55 AM, Brad Chapman wrote:
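A quick way to reproduce this kind of check from Python rather than Java; this is a sketch, not Galaxy code, and the idea is simply to see whether the server's chain verifies against the local CA bundle:

```python
import socket
import ssl

# Hedged sketch: probe whether a server's certificate chain validates
# against the system CA bundle, which is roughly the check the Java
# client performs before it fails.
def chain_validates(host, port=443):
    """Return True if the TLS handshake verifies, False otherwise."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLError:
        return False
```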
Started up a cluster on Amazon using "Launch a Galaxy Cloud Instance" and got
the following message. Since I don't have any control over where the instances
are run, I'm not sure how I can control this. The last 4 or 5 times I have
started up an existing instance, it has worked with no problem.
For this instance, you'll need to restart using the old method of launching
via the console, specifying zone 1b. Detecting the zone that an existing
cluster's volumes are in, and specifying that zone at launch, is on the short
list of things coming up for cloud launch.
On Oct 31, 2012, at
Hi,
I'm still setting up a local Galaxy. Currently I'm testing the setup of NGS tools. If I try SAM to
BAM for a BAM file that has hg18 set as its build, I get a message that
sequences are not currently available for the specified build. I guess that I either have to
manipulate one of the .loc
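For reference, .loc entries are plain tab-separated lines. A hypothetical sam_fa_indices.loc entry for hg18 might look like the following; the literal first column "index" matches old Galaxy releases as far as I recall, and the path is of course an assumption for your system:

```
index	hg18	/galaxy/data/hg18/sam_index/hg18.fa
```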
On Wed, Oct 31, 2012 at 11:30 AM, Andreas Kuntzagk
andreas.kuntz...@mdc-berlin.de wrote:
Hi,
I'm trying to test out the functional testing mechanism by running it
on an existing Galaxy tool.
First I ran
./run_functional_tests.sh -list
which produced a list of tools I can test. I chose 'vcf_annotate' and
tested it as follows:
./run_functional_tests.sh -id vcf_annotate
This
Scooter;
(cc'ing the dev list and updating the subject line in case others are
interested)
I have been looking for Java-related APIs to run workflows externally and
haven't found anything searching message forums etc. I would like to
automate uploading data coming off our HiSeq to Amazon S3 and
Downloading data is handled in lib/galaxy/webapps/galaxy/controllers/dataset.py,
method display(), which in turn calls this line:
--
return data.datatype.display_data(trans, data, preview, filename, to_ext,
chunk, **kwd)
--
Which, in most cases, calls display_data in
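The delegation above is plain polymorphism: the dataset's datatype object decides how the bytes are served. A toy model of the pattern, with simplified names that are not Galaxy's actual classes:

```python
# Hedged sketch of the dispatch described above: display() delegates
# to the datatype object, so subclasses can override display_data to
# serve their format differently.
class Data:
    """Base datatype: serve the dataset as-is."""
    def display_data(self, dataset):
        return "raw contents of %s" % dataset

class Tabular(Data):
    """Override: serve a chunked table view instead of raw bytes."""
    def display_data(self, dataset):
        return "chunked table view of %s" % dataset

def display(dataset, datatype):
    # mirrors the controller line quoted above: delegate to the datatype
    return datatype.display_data(dataset)
```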
Where do I find info if the installed applications make use of multiple nodes
via MPI(etc) which would indicate the benefit of starting up X number of
nodes for faster processing?
You'll need to look at the individual tool documentation. In general, many
tools use multiple cores; few use
Hello,
I am trying to configure my Galaxy instance and I have two problems. The first
is that I cannot delete users. I created some users for testing and enabled
the option in universe_wsgi.ini, and the button appears, but the users are
only marked as deleted; they don't disappear
Using large amazon instance
Trying to do an interval join of SNPs output from pileup (120,000
regions, 5.5 MB) with snp135Common (12,000,000 regions, 425 MB) and I get the
following errors. The goal is to pick up rs IDs for known SNPs in the list of SNPs.
Is this a memory issue?
I was able to do the
Did a subtract first to get the list of known rs SNPs that will be found in the
tumor SNPs. That ran without error. Then did a join of the subtracted list of rs
SNPs and the tumor SNPs.
So something is different in the join code than in the subtract code.
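That difference is plausible: a join has to hold candidate overlaps from one input while scanning the other, which is where memory can go, whereas a subtract can stream. A memory-light alternative is a sweep over two sorted inputs. A minimal sketch, not the Galaxy implementation; intervals are assumed to be half-open (chrom, start, end) tuples sorted by chromosome then start:

```python
# Hedged sketch: join two *sorted* interval lists by overlap without
# building an in-memory index of the larger list (a sweep join).
def overlap_join(a, b):
    out, j = [], 0
    for chrom, start, end in a:
        # drop b intervals that end before this a interval starts;
        # safe because a is also sorted, so they can never match later
        while j < len(b) and (b[j][0], b[j][2]) < (chrom, start):
            j += 1
        k = j
        # scan forward while b intervals could still start before `end`
        while k < len(b) and b[k][0] == chrom and b[k][1] < end:
            if b[k][2] > start:  # genuine overlap
                out.append(((chrom, start, end), b[k]))
            k += 1
    return out
```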
From: Scooter Willis
We are still getting empty TopHat output files on our Galaxy instance on
the cloud. We see that TopHat is generating data while the tool is running
(by monitoring our disk usage on the Amazon cloud), but the output files
are empty.
Is anyone else having this issue? Does anyone have any
Given that this doesn't seem to be happening on our public server or on local
instances, my best guess is that the issue is old code. Are you running the
most recent dist?
J.
On Oct 31, 2012, at 7:37 PM, Mohammad Heydarian wrote:
In this case, it's useful to differentiate between (i) the AMI that Galaxy
Cloud uses and (ii) the Galaxy code running on the cloud. I suspect that (ii)
is out of date for you; this is not (yet) automatically updated, even when
starting a new instance.
Try using the admin console to update to
Hi Peter, thanks again.
It turns out that, by the looks of it, it has been implemented in
lib/galaxy/datatypes/tabular.py under the class Vcf. However, despite this, it
is always the Text class in data.py that is loaded, and not the proper Vcf
one.
Can you point me in the direction of where the type is
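For what it's worth, datatype detection of this sort usually comes down to a sniff test on the file's first lines. A standalone sketch of the idea, not Galaxy's actual sniffer:

```python
# Hedged sketch: a Galaxy-style "sniffer" that decides a file is VCF
# rather than plain text. The real Vcf class lives in
# lib/galaxy/datatypes/tabular.py; this mimics only the idea, using
# the fact that a VCF starts with a ##fileformat=VCF header line.
def looks_like_vcf(path):
    """Return True if the first line declares the VCF file format."""
    with open(path) as handle:
        first = handle.readline()
    return first.startswith("##fileformat=VCF")
```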
Hello Everyone,
I am about to write a Dropbox-like syncing tool for Galaxy using Python,
with a progress bar. How do I integrate it with Galaxy? It would be easy
for clients to upload files using the syncing tool. Are there any syncing
tools available for Galaxy?
Thanks
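One way to structure such a tool is a watch loop that hands new files to an upload callable, for example a Galaxy API client such as BioBlend's upload_file; the function and parameter names below are illustrative assumptions, not a Galaxy API:

```python
import os

# Hedged sketch of a one-way sync: scan a local folder and hand any
# file not seen before to an `upload` callable (which could wrap a
# Galaxy API client). Progress-bar and retry logic are left out.
def sync_new_files(folder, upload, seen=None):
    """Upload unseen files in `folder`; return the updated seen set."""
    seen = set() if seen is None else seen
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and path not in seen:
            upload(path)
            seen.add(path)
    return seen
```

Running this in a loop (or from a filesystem-event hook) gives the Dropbox-like behavior; Galaxy itself only sees ordinary API uploads.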
Local install of Galaxy on Scientific Linux 5.5. It fails to upload a 5.2 GB
FASTQ file from the local HD, while normally loading smaller FASTQ and FASTA
datasets (less than 1 GB). Chunks of 1.2 GB remain in */database/tmp, which
all represent the beginning of the file that fails to upload. Several
attempts to