Hi,
I found the error.
sh run.sh worked fine.
So, I added some lines to my Galaxy service script to load environment
variables:
. /etc/profile.d/settings.sh
. /etc/profile.d/drmaa.sh
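For context, those two lines might sit in the service script roughly like this (a sketch only; the install path, the --daemon flag, and the script layout are assumptions, not the poster's actual file):

```
#!/bin/sh
# Hypothetical Galaxy service-script fragment.
# Source the site-wide environment files before starting Galaxy,
# so the daemon sees the same variables an interactive shell would.
. /etc/profile.d/settings.sh
. /etc/profile.d/drmaa.sh

GALAXY_HOME=/home/galaxy/galaxy-dist   # assumption: adjust to your install
cd "$GALAXY_HOME" && sh run.sh --daemon
```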
Now it works, but I still get a warning when I run a megablast:
An error occurred running this job:
Hi All,
I have moved my database from SQLite to PostgreSQL in my local instance
of Galaxy, but I want my old histories in the new database. How is that possible?
Regards
shashi
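As far as I know there is no built-in Galaxy command for this migration. Since both databases speak SQL, one low-tech starting point is to dump the SQLite data as SQL and then massage it for PostgreSQL. A minimal sketch using Python's standard sqlite3 module (the database path is an assumption, and the emitted SQLite-dialect SQL will usually need hand-editing before psql accepts it):

```python
import sqlite3

def dump_sqlite(db_path, out_path):
    """Write the full contents of an SQLite database as SQL statements.

    The output is SQLite-dialect SQL; types and quoting typically need
    manual adjustment before it can be loaded into PostgreSQL with psql.
    """
    conn = sqlite3.connect(db_path)
    try:
        with open(out_path, "w") as out:
            for line in conn.iterdump():  # yields CREATE/INSERT statements
                out.write(line + "\n")
    finally:
        conn.close()

# Hypothetical usage (Galaxy's default SQLite path is an assumption):
# dump_sqlite("database/universe.sqlite", "universe_dump.sql")
```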
On Wed, Jul 27, 2011 at 7:19 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
On Wed, Jul 27, 2011 at 2:14 PM, shashi
Hello everyone,
I'm currently trying to automate data loading and pre-processing through
Galaxy's API, and I will need to delete and share histories at some
point. I know the API is still very new but is there a way to do that by
any chance?
Thanks,
L-A
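For what it's worth, later Galaxy releases expose histories under /api/histories, where deletion maps naturally onto an HTTP DELETE. A sketch with Python's standard library (the endpoint, the key parameter, and whether your Galaxy version supports it are all assumptions; history sharing had no documented API call at the time):

```python
import urllib.request

def history_delete_url(base_url, history_id, api_key):
    """Build the URL for deleting a history via the (assumed) Galaxy API."""
    return "%s/api/histories/%s?key=%s" % (
        base_url.rstrip("/"), history_id, api_key)

def delete_history(base_url, history_id, api_key):
    """Issue an HTTP DELETE against the history resource (hypothetical endpoint)."""
    req = urllib.request.Request(
        history_delete_url(base_url, history_id, api_key), method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage (host, id, and key are placeholders):
# delete_history("http://localhost:8080", "f2db41e1fa331b3e", "my-api-key")
```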
On Thu, Jul 28, 2011 at 10:49 AM, remy d1 remy...@gmail.com wrote:
Hi,
...
Now it works, but I still get a warning when I run a megablast:
An error occurred running this job: 'num_threads' is currently ignored when
'subject' is specified.
You get that warning message from the current
Hi Ambarish,
Using what I had in my previous message:
mytoolname = drmaa://-w n -l mem_free=1G -l mem_token=1G -l h_vmem=1G/
my jobs do get submitted to the cluster, and I can see the other options in the
qstat output for the jobs,
i.e. hard resource_list: mem_free=1G,mem_token=1G,h_vmem=1G
Ka Ming
Hi,
I'm a user of Galaxy; I think Galaxy is a good idea apart from a few small parts.
1. Progress Bar for uploading file
I know that implementing a progress bar for every tool is quite hard,
but maybe it's not so hard to display the current percentage of an upload;
it would be really helpful and
On Thu, Jul 28, 2011 at 9:43 PM, Assaf Gordon gor...@cshl.edu wrote:
Hi,
The attached patch enables the BWA wrapper to work with Illumina-1.3+ FASTQ
files without grooming (which goes well with the name of the tool: Map with
BWA for Illumina).
Actually,
Changing the XML and the python
Hello Peter and all,
Peter Cock wrote, On 07/28/2011 05:08 PM:
It concerns me that you're doing this for both fastqillumina format
(good) and fastqsolexa (bad). Treating the latter as fastqillumina
would give negative scores and probably cause trouble. Unless BWA
copes but if so it is a poor
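To see why treating fastqsolexa as fastqillumina is dangerous: Solexa scores are log-odds rather than log-probabilities and can go as low as -5, so the two scales diverge at the low end. The standard conversion, as a small Python sketch (the function name is mine, not from the patch):

```python
import math

def solexa_to_phred(q_solexa):
    """Convert a Solexa (log-odds) quality score to a PHRED score.

    PHRED:  Q = -10*log10(p)         where p is the error probability.
    Solexa: Q = -10*log10(p/(1-p))   (log-odds), so scores can be negative.
    """
    return 10.0 * math.log10(10.0 ** (q_solexa / 10.0) + 1.0)

# At high qualities the scales agree closely, but a Solexa score of -5
# corresponds to a PHRED score of roughly 1.19, not -5.
```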
We experienced an issue where some of the Galaxy jobs were sitting in the 'new'
state for quite a long time. They were not waiting for cluster resources to
become available, but they hadn't even been queued up through DRMAA. We are
currently using non-debug mode, and the following were my observations:
I usually prefix the history names with an identifier so I can search for
them (e.g. AmiMT: read QC). But I agree, folders similar to the data
libraries would be useful, so I created a ticket.
https://bitbucket.org/galaxy/galaxy-central/issue/621/folders-to-organize-saved-histories
On Fri, Jul
I've been getting these errors sometimes lately, particularly when the
cluster is heavily loaded. The jobs have completed successfully, as I can
see the output if I click the pen icon, but the job is in a failed state.
Have any other sites been experiencing this problem?
Or can the galaxy
My jobs have this problem when the command for the tool is wrapped by the
stderr wrapper script.
Ka Ming
From: galaxy-dev-boun...@lists.bx.psu.edu [galaxy-dev-boun...@lists.bx.psu.edu]
On Behalf Of Edward Kirton [eskir...@lbl.gov]
Sent: July 28, 2011
Hello Mai,
I checked both your name and email against the membership lists for
galaxy-user and galaxy-dev and didn't find a match. So, you are not
subscribed to the Galaxy managed mailing lists. However, there are
two things to double-check:
1 - are you subscribed under a different email
Hi,
this is what is working for me; the 'Output not returned from
cluster' error has also disappeared.
default_cluster_job_runner = drmaa://-q galaxy -V/
If this works, you can then test your other options one by one.
With Regards,
Ambarish Biswas,
University of Otago
Department of
Hi,
you didn't specify which cluster job runner you are using. Are you using drmaa
or pbs?
With Regards,
Ambarish Biswas,
University of Otago
Department of Biochemistry,
Dunedin, New Zealand,
Tel: +64(22)0855647
Fax: +64(0)3 479 7866
On Fri, Jul 29, 2011 at 10:03 AM, Shantanu
Thanks for the reply Ambarish. We are using an SGE cluster and job submission is
done using drmaa.
--
Shantanu.
On Jul 28, 2011, at 7:24 PM, ambarish biswas wrote:
Hi,
you didn't specify which cluster job runner you are using. Are you using drmaa
or pbs?
With Regards,
Ambarish
Hi,
can you paste the configuration from your *universe_wsgi.ini* file?
With Regards,
Ambarish Biswas,
University of Otago
Department of Biochemistry,
Dunedin, New Zealand,
Tel: +64(22)0855647
Fax: +64(0)3 479 7866
On Fri, Jul 29, 2011 at 12:31 PM, Shantanu Pavgi
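For anyone following along, the cluster-related settings in universe_wsgi.ini typically look something like the fragment below (the queue name and tool id are hypothetical placeholders; check your own file for the exact values):

```ini
# universe_wsgi.ini -- cluster job runner settings (hypothetical values)
[app:main]
start_job_runners = drmaa
default_cluster_job_runner = drmaa://-q galaxy -V/

[galaxy:tool_runners]
# Per-tool overrides use the same runner URL syntax:
mytoolname = drmaa://-w n -l mem_free=1G -l mem_token=1G -l h_vmem=1G/
```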
I am testing the latest galaxy-central distribution by running FastQC. It
creates a directory called dataset_*_file in the job_working_directory and then
copies that directory to file_path when the job is complete. Is that on purpose,
or is it a bug?
Thanks,
Ilya
Ilya Chorny Ph.D.
It's a feature, although arguably necessitated by what some might
consider a design bug in FastQC.
FastQC insists on writing the HTML report with links to the generated
images, which it writes to a subdirectory. That won't work for Galaxy's
html datatype, so the wrapper script unpacks that structure
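The unpacking step amounts to copying the report's files up into the dataset's extra-files directory and rewriting the relative image links. A rough illustration of the link-rewriting half (the Images/ subdirectory name matches FastQC's layout, but the function itself is a hypothetical sketch, not the actual wrapper code):

```python
import re

def flatten_image_links(html_text):
    """Rewrite src="Images/foo.png" links to src="foo.png" so the report
    still renders after its images are copied into one flat directory."""
    return re.sub(r'src="Images/([^"]+)"', r'src="\1"', html_text)

# Hypothetical usage:
# html = open("fastqc_report.html").read()
# open("fastqc_report.html", "w").write(flatten_image_links(html))
```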
Hi
In my Galaxy instance, whatever job I submit goes into the queued state.
If I restart the server, the previously submitted jobs change to the running
state, but newly submitted jobs again go into the queued state.
I am at a loss to understand this behaviour of Galaxy and unable to
Thanks, that makes sense.
Ilya
-Original Message-
From: Ross [mailto:ross.laza...@gmail.com]
Sent: Thursday, July 28, 2011 10:14 PM
To: Chorny, Ilya
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] dataset_*_file directory being copied to file_path
when running fastqc
It's