Hi Eduardo,
I believe the job in question is not running because one of its inputs
(#101) failed to have metadata set properly. This situation can be fixed by
using the 'auto-detect' button on the failed dataset.
--nate
On Mon, Mar 31, 2014 at 5:20 PM, Eduardo Fox ofoxo...@gmail.com wrote:
Hi Ruth,
The queue was very busy this morning as it would appear that someone is
using the site for a workshop or class. I've cleared out these jobs since
they were blocking regular use, and your jobs should begin running normally
shortly. Sorry for the inconvenience.
Thanks,
--nate
On Wed,
Hi Mark,
This error is not tool-related. Could you click the “bug” icon to send us an
error report?
--nate
On Nov 11, 2013, at 8:33 AM, Mark Lindsay m.a.lind...@bath.ac.uk wrote:
Dear Galaxy Users
I wondered if anybody is having the same problem.
I am trying to run CuffDiff using the
Hi Fabrice,
Are you connecting to `usegalaxy.org`? This changed when we moved
main.g2.bx.psu.edu to usegalaxy.org.
Thanks,
--nate
On Nov 8, 2013, at 6:32 AM, Fabrice Besnard wrote:
Hi,
I am trying to connect to the Galaxy server via Filezilla in order to
upload datasets.
The connection
On Tue, Oct 22, 2013 at 12:03 PM, Nate Coraor n...@bx.psu.edu wrote:
Hi Elwood,
Jeremy and I took a look at this. The failures in your history with this
message:
Error: number of labels must match number of conditions
...are due to a regression in the cuffdiff tool
On Oct 18, 2013, at 12:48 PM, Elwood Linney wrote:
My histories seem to be stopping their processing around the Tophat-cuffmerge
steps since the change over of Galaxy online. Sometimes a red box appears
over my history name but disappears in a minute or less.
I am wondering, given the way
The problem with cuffdiff should be fixed. Still looking at the tophat
failures.
Thanks for your patience, and for sticking with us as we iron this out.
--nate
On Oct 9, 2013, at 7:52 PM, Jennifer Jackson wrote:
Hi Elwood,
As I emailed to your direct question, we are looking at your bug
On Sep 11, 2013, at 7:48 AM, Amit Pande wrote:
Hi,
I am getting the following error from the server :
Command:USER genebus...@googlemail.com
Response:331 Password required for genebus...@googlemail.com
Command:PASS ***
Response:530 Sorry, the maximum number of
On Sep 11, 2013, at 9:56 AM, Elwood Linney wrote:
Hello,
My connection with Galaxy online shut off last night while I was transferring
data to it. When I tried to connect again with Fetch this morning I was not
allowed, and got a message like "Sorry, the maximum number of clients for this user
Hi,
Almost all of the Galaxy development team uses Macs, so it should certainly
work. The problems with eggs, as you have both found, typically come with
using non-standard versions of Python. Your best bet is to use the version
that shipped with your version of OS X, which can be found in
On May 28, 2013, at 5:40 AM, Patel, Bella wrote:
Hi
I have tried to register for Galaxy but apparently a user with my email
address already exists! Very strange as I have never registered for this
software.
Can you advise?
Bella Patel
Hi Bella,
It's possible that your account
On Mar 7, 2013, at 4:46 AM, pbour...@agilent.com pbour...@agilent.com wrote:
Hi,
I noticed since yesterday that my “disk usage” % (top-right) has jumped from
~20% to ~80% for no apparent reason.
The added size of all my current histories is now only 29 GB, which should be
~11%, and I
On Feb 15, 2013, at 10:35 AM, Mike Dufault wrote:
To whom it may concern:
The History panel on the right side of the Galaxy page is taking a very, very
long time to load. Also, when it does load, I have tried to save my .bam
files and the transmission gets truncated to ~7000-8000 KB
Hi Jim,
Could you send me a URL to the dataset so I can grab a copy and try to
reproduce this problem? Sorry for the trouble you've been having with the
upload functionality and the delay in getting back to you.
--nate
On Feb 5, 2013, at 8:48 AM, Jim Robinson wrote:
Hi,
I am having a
On Feb 13, 2013, at 12:25 PM, Jim Robinson wrote:
Sorry Nate, I misunderstood at first, you want a URL to the dataset here on
my server? I can definitely copy one up to an http server, I still have
Ricardo's files on a hard disk. I'll start the copy now and let you know
when it's ready.
On Nov 16, 2012, at 10:41 AM, greg wrote:
On Fri, Nov 16, 2012 at 10:00 AM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 16, 2012, at 9:14 AM, greg wrote:
Are you running the upload tool on a cluster?
--nate
Well I have Galaxy set up to send its jobs to SGE/qsub but I haven't
verified
On Fri, Nov 16, 2012 at 10:56 AM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 16, 2012, at 10:41 AM, greg wrote:
On Fri, Nov 16, 2012 at 10:00 AM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 16, 2012, at 9:14 AM, greg wrote:
Are you running the upload tool on a cluster?
--nate
Well I have Galaxy set
it? (Will it clean up anything
placed in the new_file_path directory?)
Thanks,
Greg
On Fri, Nov 16, 2012 at 10:56 AM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 16, 2012, at 10:41 AM, greg wrote:
On Fri, Nov 16, 2012 at 10:00 AM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 16, 2012, at 9:14 AM, greg
On Nov 7, 2012, at 5:47 AM, Diam Hsu wrote:
Dear Sir or Madam,
I used to use Galaxy to analyze NGS data, but for the last two days I couldn't
log in to my Galaxy account; it also told me that my account has been marked
deleted, and asked me to contact the Galaxy administrator to restore my
On Nov 5, 2012, at 3:15 PM, greg wrote:
And am I correct in thinking that only the machine hosting the
galaxy web interface and submitting jobs needs the export
DRMAA_LIBRARY_PATH= variable?
The normal nodes running jobs don't need this, right?
Hi Greg,
That's correct. If you're
On Mon, Nov 5, 2012 at 3:23 PM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 5, 2012, at 3:15 PM, greg wrote:
And am I correct in thinking that only the machine hosting the
galaxy web interface and submitting jobs needs the export
DRMAA_LIBRARY_PATH= variable?
The normal nodes running jobs
the top, where other
variables are set.
--nate
Thanks again,
Greg
On Mon, Nov 5, 2012 at 3:33 PM, Nate Coraor n...@bx.psu.edu wrote:
On Nov 5, 2012, at 3:26 PM, greg wrote:
Well, I want it to ultimately run under Apache. Does it still make
sense to go in an init script?
thanks again
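Nate's advice above (put the export "at the top, where other variables are set") can be sketched like this; the library path is an invented SGE example, not the real one for any particular cluster:

```shell
# Sketch: set DRMAA_LIBRARY_PATH near the top of run.sh or the init script,
# alongside the other environment variables. The path below is an invented
# SGE example; substitute your scheduler's actual libdrmaa.so.
DRMAA_LIBRARY_PATH=/opt/sge/lib/lx24-amd64/libdrmaa.so
export DRMAA_LIBRARY_PATH
```

Only the submitting host (the one running the Galaxy web process) needs this; as discussed above, the cluster's execution nodes do not.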
On Oct 18, 2012, at 1:43 AM, Todd Oakley wrote:
Yes, daemon/stop-daemon is the best way. However, to stop a process that was
not started with --daemon, this is what I do:
ps aux | grep galaxy
Identify the process numbers for the three Galaxy processes, which will change
every time Galaxy is
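Todd's recipe above can be sketched as a small shell fragment; the 'paster.py serve' pattern is an assumption about how the Galaxy processes appear in ps output, so check yours first:

```shell
#!/bin/sh
# Sketch: stop a Galaxy instance that was NOT started with --daemon.
# grep_galaxy_pids prints the PIDs of matching processes from ps-style input;
# the '[p]aster' trick keeps the grep process itself out of the results.
grep_galaxy_pids() {
    grep '[p]aster\.py serve' | awk '{print $2}'
}

# Typical use (commented out so this sketch is safe to run as-is):
#   ps aux | grep_galaxy_pids | xargs kill       # polite SIGTERM first
#   ps aux | grep_galaxy_pids | xargs kill -9    # only if TERM is ignored
```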
On Oct 3, 2012, at 2:02 PM, Kshama Aswath wrote:
Hello:
I have this 20 GB dataset that I uploaded into my history and am trying to
run through the groomer. The first dataset was uploaded yesterday; I ran the
groomer on it, and it was not done this morning. The message indicated that it
is
On Oct 2, 2012, at 12:45 PM, greg wrote:
Hi guys,
I'm following the instructions here to install Galaxy on our SGE cluster.
http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Cluster
(I'm aiming for the unified install)
Here are a couple of questions I'm hoping someone could clear up for
On Aug 27, 2012, at 8:53 AM, petr wrote:
We are running Galaxy on a PBS cluster. In our setting some jobs can take
several weeks to finish, and it is virtually impossible to wait for a suitable
moment when the server can be restarted or shut down without interrupting
running tasks. Is there an option
Hi Neil,
You're looking for the option to link to data, which is explained here:
http://wiki.g2.bx.psu.edu/Admin/Data%20Libraries/Uploading%20Library%20Files
--nate
On Aug 22, 2012, at 12:44 AM, neil.burd...@csiro.au wrote:
When uploading a file using “Get Data” it seems the file is
On Jun 25, 2012, at 2:49 AM, Björn Grüning wrote:
Hi Norbert,
please have a look at Galaxy's FTP upload feature.
The idea is that every user gets an FTP folder into which they can upload
data. If you create such a directory, you can probably link your data in
such directories and Galaxy
On Jun 25, 2012, at 1:53 PM, Shanshan Pang wrote:
Hi, I am at the 'Map with Bowtie' step, which has had a status of 'Job is
waiting to run' for two days. I tried to re-run it, but that did not work.
Any suggestions?
Thanks!
Hi Shanshan,
The cluster used to run mapping and other NGS tools is very
On Jun 16, 2012, at 7:38 PM, Jayaraman, Shyam wrote:
Hi,
My username is shyamsundar19...@gmail.com
Galaxy main has not been running jobs since this morning. They stay queued
(gray) forever. I deleted and then re-queued them twice. Still nothing. Is the
server down or is something wrong
On Jun 16, 2012, at 7:31 PM, Jacob Musser wrote:
Hello,
I uploaded a couple small files via ftp to do some basic text manipulation on
them in the main galaxy server. I have queued up several jobs (including
just importing the files I uploaded into a history) but after two hours of
Hi,
Please try starting with:
$ LC_ALL=C ./run.sh
--nate
On May 18, 2012, at 11:22 AM, Seyed Mehdi Jazayeri wrote:
Dear Sir/Madam,
I am a PhD student working on RNA-Seq as my dissertation. As a matter of fact
I want to do analyses of gene expressions for the sequences that I have
Hi Shisheng and Mónica,
I see that jobs weren't dispatching on Galaxy Main, and have gotten them moving
again. It'll take a bit to run through the backlog. Sorry for the
inconvenience, and thanks for using Galaxy.
--nate
On Mar 22, 2012, at 5:30 AM, Mónica Pérez Alegre wrote:
Hi Shisheng
On Mar 20, 2012, at 8:13 AM, Richard Mark White wrote:
Hi,
Is anyone else having trouble connecting to main.g2.bx.psu.edu for FTP
uploads? I cannot seem to connect since yesterday.
Hi Rich,
It's back up now.
--nate
Rich
On Jan 30, 2012, at 11:37 AM, Stefan Kroeger wrote:
Hi Jen,
On 30.01.2012 16:12, Jennifer Jackson wrote:
Are you still having this problem? One thought is that this file is some
sort of README file in with the other data files from your source.
Another is that the data is an annotation
On Dec 22, 2011, at 5:35 PM, Sell, Christian wrote:
Hi,
I encountered similar problems today. I can’t open some histories, upload
data or transfer data between histories.
Regards
Christian
Hi,
Due to extremely high load at the time, jobs were delayed. This has been
resolved, so
On Dec 27, 2011, at 8:52 PM, Gabor Bartha wrote:
I have tried to use ftp to upload files to main.g2.bx.psu.edu but the jobs
have been failing with:
421 service not available, remote server has closed connection
after about 1.5GB.
Isn't FTP the way you are supposed to upload large files?
On Dec 13, 2011, at 5:33 AM, Paul-Michael Agapow wrote:
A perhaps obvious question: how do I work out what version of Galaxy an
instance is?
This has come up a few times because of apparent bugs and different
behaviour across various development and production instances. Now if you’re
On Dec 14, 2011, at 11:19 AM, Magdalena Strzelecka wrote:
Hi,
I have submitted some jobs to Tophat, but they have not started since
yesterday (Dec 13th), i.e., they were in a queue for 12 hrs. I have
re-submitted everything again (2 jobs), but the same situation is happening.
Is there
On Dec 17, 2011, at 8:34 AM, Richard Mark White wrote:
I have been unable to access the site for the past several hours. Are others
having the same issue?
Hi Rich,
Our core router has crashed; we're working on the problem and hope to have it
fixed within the next few hours. Sorry for the inconvenience.
On Nov 30, 2011, at 12:49 PM, Richard Mark White wrote:
Hi,
I was nearing my disk quota (at 97%), so I deleted a large number of
datasets using 'delete permanently'. But my usage did not go down at all.
Is there a delay in this happening, or is there some way to purge the files?
Hi
Please consider the environment. Do you really need to print this email?
Nate Coraor n...@bx.psu.edu 11/11/2011 17:16
On Nov 11, 2011, at 10:01 AM, Andrew South wrote:
Thanks Nate, hope it's a quick fix. Best wishes, Andy
Hi Andy,
All backlogged NGS jobs should now be running, new ones
On Nov 11, 2011, at 9:21 AM, Andrew South wrote:
Hello folks
Anyone else having trouble with running Lastz to map?
Jobs are being sent but not running.
It stopped working for me two days ago after working perfectly, I've tried
fiddling with the formats but no joy.
Hi Andy,
It
Galaxy,
--nate
Nate Coraor n...@bx.psu.edu 11/11/2011 14:55
On Nov 11, 2011, at 9:21 AM, Andrew South wrote:
Hello folks
Anyone else having
On Nov 7, 2011, at 7:11 PM, shamsher jagat wrote:
I uploaded two BAM files and was working with them, but when I tried to use
them again they had been automatically deleted, along with all the steps of my
analysis. Any reason, or am I missing something?
Hi Shamsher,
Were you logged in when you
GANDRILLON OLIVIER wrote:
Hello
I just received the following message while using Galaxy through the web
You are over your disk quota. Tool execution is on hold until your disk
usage drops below your allocated quota.
I deleted a couple of files but it didn't help.
I checked the FAQ
Richard Mark White wrote:
hi,
so i went to options--saved history--advanced--deleted datasets. then
checked all of them, and then hit permanently delete.
but nothing happened. they still show up as deleted, and they are taking up
lots of my quota.
how do i get rid of these?
rich
Hi
Great, thanks for letting us know.
Richard Mark White wrote:
actually, i just waited a bit and now they are deleted.
r
From: Nate Coraor n...@bx.psu.edu
To: Richard Mark White whit...@yahoo.com
Cc: GANDRILLON OLIVIER olivier.gandril...@univ-lyon1.fr
Oren Livne wrote:
Dear All,
We have a lot of sequencing data files whose locations and
identifiers are managed by our home-grown webapp. We'd like to
enable users to single-sign-on from our webapp into Galaxy (we will
be using open ID on both systems for that). When the user opens a
Galaxy
Oren,
I do believe some folks have done this; I am a bit surprised none have
replied. But yes, since Condor supports DRMAA, you should be able to
point $DRMAA_LIBRARY_PATH at Condor's libdrmaa.so, enable the drmaa
runner, and try it out.
--nate
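A sketch of the corresponding universe_wsgi.ini settings (key names recalled from configuration files of that era; verify against your own copy):

```ini
; Enable the generic DRMAA runner; with DRMAA_LIBRARY_PATH pointed at
; Condor's libdrmaa.so, jobs are dispatched through Condor.
start_job_runners = drmaa
default_cluster_job_runner = drmaa:///
```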
Oren Livne wrote:
Dear Dave, Victor,
Is it
Peter Cock wrote:
On Wed, Jun 22, 2011 at 6:47 AM, Robert Curtis Hendrickson
curt...@uab.edu wrote:
Nate,
Galaxy's ability to pull files with user/password from FTP sites as a
client is great.
However, I need to pull data from an HTTP site at a sequencing center with
Christopher Schroeder wrote:
Dear all,
we extended the Galaxy functionality with a couple of smaller custom
scripts. Now we are in need of some temp files, since our data is
getting bigger. As far as I understand Galaxy, by now there might be
several options:
- job_working_directory
Hi,
The cluster resource manager unexpectedly died. Everything should be
working again after a restart. Please let us know if you're still
experiencing any problems, and thanks for using Galaxy.
--nate
Nicola Nadeau wrote:
I've been getting the same error when trying to run Picard, Mark
Hi Carlos,
We're expecting to release a stable distribution this week, but you are
always free to pull changes from the central repository. The suggested
method would be to just pull all changesets up to the one you want, or
up to the tip changeset.
It's not trivial to pull single changesets,
David K Crossman wrote:
Hello!
We uploaded 12 samples to Galaxy last night via FTP. This
morning, I went to Get Data and clicked on all 12 that were under the FTP
location, chose the type of file they were and reference genome and then
clicked Execute. The 12 moved
Robert Jackson wrote:
Does anyone have practical info on galaxy setup on a cluster?
Hi Robert,
Please see the production server documentation at:
http://usegalaxy.org/production
--nate
Robert C. Jackson
Software Systems Specialist III
The University of Texas Pan-American
Computer
somy ork wrote:
Hi,
I am trying to configure Galaxy to enable file upload via FTP.
My server is configured with SFTP, not FTP.
Will this create a problem in Galaxy, since all my input files are above 2 GB?
Should I change any settings in the universe_wsgi.ini file?
Please guide me.
Hi
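For reference, the FTP upload settings in universe_wsgi.ini looked roughly like this (a sketch; the key names are recalled from configs of this era and the values are placeholders):

```ini
; Where the FTP server deposits per-user upload directories, and the
; hostname shown to users in the upload help text:
ftp_upload_dir = /srv/galaxy/ftp
ftp_upload_site = ftp.example.org
```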
Hello,
Please don't set your bug report address (error_email_to in
universe_wsgi.ini) to the user mailing list. This should be set to a
local address at your site.
--nate
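A minimal sketch of the setting Nate describes (the address is a placeholder for a local admin contact):

```ini
; universe_wsgi.ini -- send tool error reports to a local admin address,
; not to the public mailing list:
error_email_to = galaxy-admin@example.org
```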
galaxy-user@lists.bx.psu.edu wrote:
GALAXY TOOL ERROR REPORT
This error report was sent
Matloob Khushi wrote:
Hello Galaxy Users
We are anticipating setting up a local instance of Galaxy and are wondering
what is the most popular (or best) flavour of Linux people out there are using.
I also wonder which OS UseGalaxy.org uses. Thanks for your help.
Hi Matloob,
The public
Paul-Michael Agapow wrote:
From: Tilahun Abebe tilahun.ab...@uni.edu
We are trying to load Illumina data to our local Galaxy instance. The
files are between 700 MB and 2.2 GB. Files below 2 GB load in less
than
5 minutes. Files larger than 2 GB don't upload at all. We installed
. Has anyone tried this before?
Yes, this is a standard feature. zip, gzip, and bzip2 are all
supported. Only one file per archive at this time, however.
--nate
Thanks.
Tilahun
---
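The compressed-upload support Nate describes can be exercised like this (a sketch; the file names are throwaway examples):

```shell
# Sketch: gzip a large file before uploading; Galaxy unpacks single-file
# zip/gzip/bzip2 archives on its side. 'reads.fastq' is a throwaway example.
printf '@r1\nACGT\n+\nIIII\n' > reads.fastq
gzip -c reads.fastq > reads.fastq.gz   # -c writes to stdout; original kept
ls -l reads.fastq.gz                   # this .gz is what you upload
```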
Nate Coraor wrote:
Paul-Michael Agapow wrote:
From: Tilahun Abebe tilahun.ab...@uni.edu
We
-platform FTP client.
--nate
From: Nate Coraor [n...@bx.psu.edu]
Sent: Thursday, March 31, 2011 11:36 PM
To: Sher, Falak
Cc: galaxy-user@lists.bx.psu.edu
Subject: Re: [galaxy-user] Galaxy server-FTP
Sher, Falak wrote:
Hi
I would like to use Galaxy
Sher, Falak wrote:
Hi
I would like to use the Galaxy server to upload files via FTP. I need help
logging in to the FTP server at main.g2.bx.psu.edu.
I don't see an FTP login option there.
Help, please?
Hi Falak,
What software are you using to try to connect via FTP?
--nate
Falak
Felix Hammer wrote:
Hi, is there currently a problem with the Galaxy Server?
When will it be up again?
Hi Felix,
There were some problems caused by unusually high load which should now
be resolved. Jobs are running now, albeit a bit slowly. Please let us
know if there are any continuing
James Lindsay wrote:
Hello,
I was wondering if there was a way to export a dataset to the file
system. Basically, I think it would be advantageous if someone could
copy a dataset to an export folder; they could then FTP this data
away or work with it locally.
Hi James,
This doesn't exist
Felix Hammer wrote:
Hi,
just trying to use Bowtie, but the job does not start.
Is there something wrong with the server at the moment?
Hi Felix,
It looks like the reservation on the cluster which runs bowtie jobs
isn't working at the moment and there's some contention for resources.
I've
Felix Hammer wrote:
Hi,
is it possible to use SCP to upload files instead of FTP?
Hi Felix,
At this time, no.
--nate
thxbye,
Felix
Felix Hammer wrote:
Hi,
I am trying to map sequences with Bowtie.
Have been waiting for a few hours now, but it still says 'Job is waiting to
run'.
A few days ago I didn't have this problem and the job started immediately.
My guess is that the jobs are still stuck in a queue?
How can I