Dear Galaxy Support,
I'm getting the following error message when trying to process larger SOLiD
files.
ERROR MESSAGE: Cluster could not complete job
- Compute Quality Statistics: got the error first; the job ran OK after
re-running it.
- A subsequent job converting qual/csfasta to fastq
Hi All,
I'm looking to batch process 40 large data sets with the same Galaxy
workflow.
This obviously can be done in a brute-force manual manner.
However, is there a better way to schedule/invoke these jobs in batch:
1) from the UI with a plugin
2) from the command line
3) via a web service
Thanks in advance
dataset can be set to multiple files in this fashion.
-Dannon
On Feb 6, 2012, at 4:18 PM, Dave Lin wrote:
Hi All,
I'm looking to batch process 40 large data sets with the same galaxy
workflow. …
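Dannon's (truncated) reply points at driving this through the API. A minimal sketch of option 3, using BioBlend (the Python client for the Galaxy API); the workflow ID, dataset IDs, URL, and API key below are placeholders, and `invoke_workflow` is the current BioBlend call, which may differ on older releases:

```python
# Hypothetical BioBlend sketch: queue the same workflow once per dataset.
# All IDs/URLs here are placeholders, not values from this thread.

def build_invocations(workflow_id, dataset_ids, input_step="0"):
    """Pure helper: one invocation payload per dataset."""
    return [
        {
            "workflow_id": workflow_id,
            "inputs": {input_step: {"src": "hda", "id": ds_id}},
        }
        for ds_id in dataset_ids
    ]

def run_batch(galaxy_url, api_key, workflow_id, dataset_ids):
    # Imported here so the sketch can be read without bioblend installed.
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url=galaxy_url, key=api_key)
    for payload in build_invocations(workflow_id, dataset_ids):
        # invoke_workflow schedules the run server-side and returns
        # immediately, so 40 datasets can be queued from one loop.
        gi.workflows.invoke_workflow(
            payload["workflow_id"], inputs=payload["inputs"]
        )
```

Galaxy's scheduler then runs the queued invocations as cluster capacity allows, which is what makes the API route preferable to 40 manual UI submissions.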
Hi All,
What is the recommended process for expanding the galaxyTools volume for an
existing Galaxy instance (using EC2/CloudMan)?
I tried the following, but it didn't work for me.
0) Terminate cluster.
1) Amazon EC2: create a snapshot of the current galaxyTools volume
2) Amazon EC2: create a volume from
needs.
Let me know if you need to modify an existing cluster and I'll guide you
through the process then.
Enis
On Thu, Feb 16, 2012 at 9:31 PM, Dave Lin d...@verdematics.com wrote:
Hi All,
What is the recommend process for expanding the galaxyTool volume for an
existing galaxy instance
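The snapshot-then-larger-volume steps Dave lists can be scripted; here is a sketch using boto3 (a modern AWS SDK, not what existed at the time of this thread). The volume ID, sizes, and availability zone are placeholders, and current CloudMan releases may handle the resize for you, as Enis implies:

```python
# Hypothetical boto3 sketch of: snapshot the galaxyTools volume, then create
# a larger volume from that snapshot. IDs and sizes are placeholders.

def expanded_size_gb(current_gb, extra_gb):
    """Pure helper: the new volume must be larger than the old one."""
    if extra_gb <= 0:
        raise ValueError("extra_gb must be positive")
    return current_gb + extra_gb

def grow_volume(volume_id, extra_gb, availability_zone):
    import boto3  # imported here so the sketch reads without the SDK installed

    ec2 = boto3.client("ec2")
    vol = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]
    snap = ec2.create_snapshot(
        VolumeId=volume_id, Description="galaxyTools before resize"
    )
    # Wait until the snapshot is complete before building a volume from it.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    # The new, larger volume must be in the same AZ as the cluster instance.
    new = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        Size=expanded_size_gb(vol["Size"], extra_gb),
        AvailabilityZone=availability_zone,
    )
    return new["VolumeId"]
```

After this, the old volume would be detached and the new one attached and the filesystem grown, which is the part step "2) …" of the original message is truncated before.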
Dear Cloudman Team,
I created a fresh galaxy instance using AMI: galaxy-cloudman-2011-03-22
(ami-da58aab3)
In case it matters, instance type = High-Memory Double Extra Large
I was trying to use the CloudMan Admin Console to update Galaxy (from
http://bitbucket.org/galaxy/galaxy-central), but galaxy
On Apr 24, 2012, at 4:36 PM, Dave Lin wrote:
Dear Cloudman Team,
I created a fresh galaxy instance using AMI: galaxy-cloudman-2011-03-22
(ami-da58aab3) …
Hi Brad, Galaxy Support-
Besides changing them individually, is there a way to change the datatype
for a large number of files from fastq to fastqsanger in batch? Either via
BioBlend or manually via the UI?
Thanks,
Dave
On Fri, Dec 7, 2012 at 3:53 AM, Langhorst, Brad langho...@neb.com wrote:
…
If you need to convert a large number, you could set up a Galaxy workflow
with a single FASTQ Groomer step. That would allow you to start the job on
many fastq files at once.
Brad
On Apr 25, 2013, at 12:14 AM, Dave Lin d...@verdematics.com wrote:
Hi Brad, Galaxy Support-
Besides changing
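For the BioBlend route Dave asks about, a sketch along these lines could work; note that whether `update_dataset` accepts a `datatype` field depends on the Galaxy release, so treat that keyword as an assumption to verify against your server:

```python
# Hypothetical BioBlend sketch: re-type every fastq dataset in a history to
# fastqsanger. The `datatype` keyword is a pass-through field whose support
# varies by Galaxy release -- an assumption, not a documented guarantee.

def needs_retype(contents, from_ext="fastq"):
    """Pure helper: pick dataset IDs whose extension should change."""
    return [
        c["id"]
        for c in contents
        if c.get("extension") == from_ext and not c.get("deleted", False)
    ]

def retype_history(galaxy_url, api_key, history_id, to_ext="fastqsanger"):
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url=galaxy_url, key=api_key)
    # contents=True returns one metadata dict per dataset in the history.
    contents = gi.histories.show_history(history_id, contents=True)
    for ds_id in needs_retype(contents):
        gi.histories.update_dataset(history_id, ds_id, datatype=to_ext)
```

Unlike the Groomer workflow Brad suggests, this only relabels the datatype rather than rewriting the files, so it is only appropriate when the data really are already Sanger-scaled.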
I am getting similar errors to those Brian reported back in March. (Note: we
appear to have the same last name, but there's no relation.)
An error occurred with this dataset: *Job output not returned from cluster*
- Running on CloudMan with 5-6 nodes (xlarge).
- The error "Job output not returned from cluster" occurs consistently when
I launch a large number of samples.
If I analyze the same data sets/workflow but launch 5 at a time, the
analysis proceeds smoothly.
Any pointers would be appreciated.
Thanks
Dave
On Thu, May 2, 2013 at 1:51 PM, Dave Lin d...@verdematics.com wrote:
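Dave's own observation (5 at a time works; all at once fails) suggests throttling submissions client-side. A sketch of that workaround; `submit` is a placeholder for whatever actually launches a run (e.g. a BioBlend call), and the batch size and pause are assumptions to tune:

```python
# Sketch of the workaround the thread describes: submit samples in small
# batches instead of all at once, so the cluster queue is never flooded.
# batch_size=5 mirrors what worked for Dave; pause_s is a guess to tune.

import time

def chunks(items, size):
    """Pure helper: split a list into consecutive batches of `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def throttled_submit(samples, submit, batch_size=5, pause_s=60):
    for batch in chunks(samples, batch_size):
        for sample in batch:
            submit(sample)      # placeholder: e.g. invoke a Galaxy workflow
        time.sleep(pause_s)     # let the cluster drain before the next batch
```

A fancier version would poll job states between batches instead of sleeping a fixed interval, but even this fixed pacing reproduces the "5 at a time" behavior that avoided the error.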