Hi Nate
Thank you very much for your e-mail.
I am still testing and working on the last distribution, hopefully going
live this Sunday. But I will look into downloading and upgrading (one of
our dev servers) to the new distribution next week.
Regards, Hans
On 01/19/2012 07:39 PM, Nate wrote:
Hello,
We've created a new binary datatype for .fastq.gz files following the
same methodology as the BAM files since we don't want our fastq.gz
files to be gunzipped. I added the appropriate code in upload.py to
make sure of this. This new datatype and extension successfully does
not gunzip our
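The mechanism being described, sketched in a hedged way (this is NOT Galaxy's actual upload.py; the extension name "fastqgz" and the KEEP_COMPRESSED set are assumptions for illustration): upload code can recognize gzip input by its two magic bytes and decompress it unless the target extension is registered as one that should stay compressed.

```python
# Hypothetical sketch, not Galaxy's real upload code: detect gzip by
# its magic bytes and skip decompression for "keep compressed" types.
GZIP_MAGIC = b"\x1f\x8b"            # first two bytes of any gzip file
KEEP_COMPRESSED = {"fastqgz", "bam"}  # assumed extension names

def is_gzipped(path):
    """True if the file starts with the gzip magic bytes."""
    with open(path, "rb") as fh:
        return fh.read(2) == GZIP_MAGIC

def should_gunzip(path, ext):
    """Decompress only when the extension is not a keep-compressed type."""
    return is_gzipped(path) and ext not in KEEP_COMPRESSED
```

With a check like this in place, a .fastq.gz upload bound to the compressed datatype passes through untouched while ordinary gzipped text still gets expanded.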
On Fri, Jan 20, 2012 at 12:42 PM, Leandro Hermida
soft...@leandrohermida.com wrote:
Hello,
We've created a new binary datatype for .fastq.gz files following the
same methodology as the BAM files since we don't want our fastq.gz
files to be gunzipped. I added the appropriate code in upload.py
Dear all,
due to some work on the user management of our servers, we had to rename
the user Galaxy runs as. Up to now we have used ident as the Postgres
authentication method, meaning that Postgres expects the Unix user
galaxy to have the permissions of the galaxy Postgres user.
The entry in
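One way to keep ident-style authentication working after renaming the Unix account is a user-name map in pg_ident.conf, referenced from pg_hba.conf. A minimal sketch, assuming placeholder names (galaxy_db, galaxymap, new_unix_user are illustrative, not taken from the thread; newer PostgreSQL releases call this method "peer" for Unix-socket connections):

```
# pg_hba.conf: local connections use ident with an explicit map
local   galaxy_db   galaxy   ident map=galaxymap

# pg_ident.conf: MAPNAME  SYSTEM-USERNAME  PG-USERNAME
galaxymap   new_unix_user   galaxy
```

This lets the renamed system account keep authenticating as the existing galaxy database role without changing the role itself.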
Hello Leandro,
I believe this behavior is due to the make_library_uploaded_dataset() method in
the ~/lib/galaxy/web/controllers/library_common controller. The current method
looks like this:
def make_library_uploaded_dataset( self, trans, cntrller, params, name,
                                   path, type,
Hi Peter,
Sorry I wasn't clear: the .gz gets stripped from the name in the
Galaxy UI when you upload the files into a data library via the manage
data libraries form. When you upload via Get Data - Upload File,
the .gz is preserved, which is what one would want since I am not
having it gunzipped.
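The naming behavior being reported can be illustrated with a hedged sketch (this is NOT the actual library_common.py code; the function and the suffix list are invented for illustration): a naive upload path strips a trailing .gz from the dataset name, which is wrong for types meant to stay compressed.

```python
import os

# Hypothetical illustration of the behavior described above.
KEEP_COMPRESSED_SUFFIXES = (".fastq.gz",)  # assumed list of suffixes

def uploaded_dataset_name(filename):
    name = os.path.basename(filename)
    if name.endswith(KEEP_COMPRESSED_SUFFIXES):
        return name          # preserve .gz for keep-compressed types
    if name.endswith(".gz"):
        return name[:-3]     # old behavior: strip the .gz suffix
    return name
```

The fix Greg suggests amounts to adding the first branch, so keep-compressed types retain their full name.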
Hi Greg,
OK, this code change to library_common.py works: now when you use the
data libraries menu to bring in .fastq.gz files it doesn't cut off the
.gz. Thank you!
best,
Leandro
On Fri, Jan 20, 2012 at 3:32 PM, Greg Von Kuster g...@bx.psu.edu wrote:
Hello Leandro,
I believe this behavior is
On Jan 19, 2012, at 11:27 AM, Ivan Merelli wrote:
Hi,
is it possible to restrict the number of concurrent jobs for a single
user in a local instance of galaxy? I see that in the public site
this feature is implemented, but I don't find documentation about
how to implement this locally. I
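For a local instance, the place to look is the sample configuration shipped with your release. A hedged sketch of what the setting can look like (the option name is an assumption; verify against your own universe_wsgi.ini.sample, since it varies by Galaxy release):

```ini
# universe_wsgi.ini -- assumed option name, check your release's sample
[app:main]
user_job_limit = 4
```

If your release predates per-user limits, there may be no equivalent knob and the public site's behavior cannot be reproduced from configuration alone.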
On Jan 18, 2012, at 11:54 AM, Ryan Golhar wrote:
Nate - Is there a specific place in the Galaxy code that forks the samtools
index on bam files on the cluster or the head node? I really need to track
this down.
Hey Ryan,
Sorry it's taken so long, I've been pretty busy. The relevant code
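For illustration only (this is NOT Galaxy's actual code; the function names are invented), the work being discussed amounts to forking a samtools index command when BAM metadata is set, and where that fork runs depends on whether metadata is set on the head node or as part of the cluster job:

```python
import subprocess

def build_bam_index_command(bam_path, index_path=None):
    """Build the argv for 'samtools index <in.bam> [out.index]'."""
    if index_path is None:
        index_path = bam_path + ".bai"
    return ["samtools", "index", bam_path, index_path]

def index_bam(bam_path):
    # Runs wherever this Python process runs -- head node if metadata
    # is set internally, compute node if it is set externally.
    subprocess.check_call(build_bam_index_command(bam_path))
```

Tracking down which process calls this is effectively Ryan's question: internal metadata setting means the fork happens in the Galaxy server process itself.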
January 20, 2012 Galaxy Development News Brief
Highlights:
* New Object Store data integration layer introduced
* RNA-seq updates: TopHat to 1.4.0 and Cufflinks, CuffDiff,
CuffCompare to 1.3.0.
* Tool Shed installation features and community tool additions
* Trackster performance upgrades and
Galaxy shouldn't be trying to do that, but it also shouldn't cause metadata to
fail.
On Jan 20, 2012, at 10:52 AM, Ryan Golhar wrote:
Thanks Nate. I'll play with that. Could it be that Galaxy is trying to
reset the permissions or ownership of the imported BAM files? I'm not
copying them
Hi,
There seems to be a weird bug with the Input dataset workflow
control feature, hard to explain clearly but I'll try my best.
If you define a custom datatype that is a simple subclass of an
existing galaxy datatype, e.g.:
<datatype extension="myext" type="galaxy.datatypes.data:Text" subclass="True"/>
Thanks Nate.
I have tried that but it seems to run into problems with the expected output.
This may simply be the aforementioned Python dependency issues, but I'll have
to investigate more in the coming week when I have time.
Just to get an idea of what I would like, I mainly want the
Just wanted to add that we have consistently seen this issue of 'samtools
index' running locally on our install. We are using SGE scheduler. Thanks for
pointing out details in the code Nate.
--
Shantanu.
On Jan 20, 2012, at 9:35 AM, Nate Coraor wrote:
On Jan 18, 2012, at 11:54 AM, Ryan
On Jan 11, 2012, at 12:18 PM, Ann Black wrote:
Good Morning galaxy group!
I was hoping that someone might have some ideas on a problem we have
experienced a handful of times running galaxy on our local cluster.
Occasionally we experience some communication timeouts between our cluster
On Jan 20, 2012, at 12:12 AM, ambarish biswas wrote:
Hi,
I'm not sure if this is related, but I've also noticed a
similar problem where the program is not found in the path, even though
it exists. I have submitted it as an issue, but it could be related,
so I'm mentioning it here. The problem
Yes, Nate, but that fails the job even though it is, in fact, still
running, and the error should be ignored:
except Exception, e:
    # so we don't kill the monitor thread
    log.exception( "(%s/%s) Unable to check job status" % ( galaxy_job_id, job_id ) )
Great idea, Nate (hint! hint!).
On Thu, Jan 19, 2012 at 10:27 AM, Nate Coraor n...@bx.psu.edu wrote:
Hey Ed,
This is a neat approach. You could possibly also do this in the Galaxy
database by associating users and groups with roles that match project names.
A select list or history
I really want this for Torque/Moab where the native spec flag is -A
Sent from my iPhone
On Jan 20, 2012, at 7:34 PM, Edward Kirton eskir...@lbl.gov wrote:
Great idea, Nate (hint! hint!).
On Thu, Jan 19, 2012 at 10:27 AM, Nate Coraor n...@bx.psu.edu wrote:
Hey Ed,
This is a neat