Hi all;
This year the Bioinformatics Open Source Conference (BOSC) will be taking
place in Vienna, Austria on July 15-16th. This is a yearly opportunity for
open source bioinformatics developers to get together in person and discuss
ongoing projects. Nomi Harris, Peter Rice and the other
Louise-Amélie;
I just ran into a huge problem with the database. I'm currently trying
to transfer my data from MySQL to PostgreSQL by writing a Perl script
that would do the job.
I've used this Ruby script to convert a Galaxy MySQL database to
PostgreSQL:
James;
I'm working on a simple Python script that automagically creates
all the indexes and does the pre-processing necessary to add a genome to Galaxy.
I was wondering if anyone had a script to download all the genomes
that are presented to you in the default genome drop-down box?
If you haven't already registered for BOSC, now is your chance--after
June 3, prices will go up!
Registration for BOSC is through the ISMB main conference website:
http://www.iscb.org/ismbeccb2011-registration#sigs . Since BOSC is a
two-day SIG, the price is 2x the one-day SIG price listed on
Colin;
Second (and still unsolved) problem: when I try to run a Picard tool, I get
the following exception:
File "/home/hg/Galaxy/galaxy-dist/tools/picard/picard_wrapper.py", line 40
class PicardBase():
^
SyntaxError: invalid syntax
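For context, `class PicardBase():` with empty parentheses only became valid syntax in Python 2.5, so this error usually points at an old interpreter rather than a broken wrapper. A minimal sketch that checks whether the running Python accepts that form:

```python
import sys

# `class PicardBase():` (empty parentheses) was a SyntaxError before
# Python 2.5; compiling the snippet shows whether this interpreter accepts it
snippet = "class PicardBase():\n    pass\n"
try:
    compile(snippet, "picard_wrapper.py", "exec")
    print("syntax accepted on Python %d.%d" % sys.version_info[:2])
except SyntaxError:
    print("this Python is too old for the wrapper")
```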
What version of Python are you
Clare;
I am trying to use install_data_s3 (data_fabfile.py) to get data from
the cloudbiolinux bucket into a virtualbox VM, as per the instructions
at usegalaxy.org/vm .
When I ran
fab -f data_fabfile.py -H localhost
install_data_s3:config/galaxy_default_biodata.yaml
the first time, it
Mattias;
Thanks for clarifying. One thing I cannot find out is whether Brad's scripts
install the packages at the system level or in a specific directory
(like $GALAXY_APPS/package/version/).
The script installs them at the system level, primarily using the
package manager. There are some
Kip;
Are these binary BAM or BigWig type files you are having trouble
displaying at UCSC? I don't believe the Paste server has support for the
byte-range requests that UCSC needs, and you need to send files through
the proxy server. I've done this with nginx so don't have direct Apache
experience
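For reference, nginx answers byte-range requests on static files by default, so it is mostly a matter of serving the dataset files directly. A hedged sketch only; the location name and path are placeholders, not a tested Galaxy configuration:

```nginx
# hypothetical location serving Galaxy dataset files directly;
# nginx handles Range requests on static files out of the box,
# which is what the UCSC browser needs for BAM/BigWig access
location /ucsc_files/ {
    alias /home/galaxy/database/files/;
}
```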
Marcel;
well, I know I do not have to create a new AMI if I want to reuse an
instance myself.
However, I would like to share the modified Galaxy CloudMan version with
other people, and therefore I do have to create an AMI.
What Enis was suggesting is using the share-a-cluster
Scott;
The user passes in the AWS access and secret keys in the user-data
box. This page has all the details about the YAML format CloudMan
expects (in step 2 of the detailed steps):
http://wiki.g2.bx.psu.edu/Admin/Cloud
You can pick the passed in user-data up from any instance with
this url:
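For concreteness, the user-data is a short YAML document; a hedged sketch of its general shape (field names as commonly shown for CloudMan at the time; all values are placeholders, and the wiki page above is the authoritative reference):

```yaml
# hypothetical CloudMan user-data; every value here is a placeholder
cluster_name: my-analysis-cluster
password: a-cluster-password
access_key: YOUR_AWS_ACCESS_KEY
secret_key: YOUR_AWS_SECRET_KEY
```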
Yves;
I am currently investigating whether Galaxy CloudMan can help us analyze
large NGS datasets.
I was first impressed by the simple setup, the autoscaling, and the
usability of Galaxy CloudMan, but soon ran into the EBS limit of 1 TB.
I thought I would be clever and unmounted the
your abstract, please visit
http://www.open-bio.org/wiki/BOSC_2012#Submitting_Abstracts
BOSC 2012 Organizing Committee:
Nomi Harris (chair), Jan Aerts, Brad Chapman, Peter Cock, Chris Fields, Erwin
Frise, Peter Rice
Jan;
Glad to hear you got Galaxy running successfully. It sounds like
everything is good to go once we sort out the disk space issue.
However, when I try to use NX to get the virtual desktop going I get the
message /usr/bin/nxserver: line 381: echo: write error: No space left
on device.
dev  home  lib32  media  pkg  run  selinux  srv  tmp  usr  vmlinuz
Thanks again,
Jan
On Mar 6, 2012, at 8:47 PM, Brad Chapman wrote:
Jan;
Thanks for getting back with all the detailed information. I dug into
this further and understand what is happening:
- tools/data_source/upload.py calls
lib/galaxy/datatypes/sniff.py:stream_to_file
- stream_to_file uses Python's tempfile module
- tempfile defaults to using /tmp
- As large
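The chain above can be sketched directly: tempfile honors $TMPDIR, so pointing it at a larger filesystem redirects those temporary upload files. The directory name below is illustrative:

```python
import os
import tempfile

# tempfile picks its directory from $TMPDIR (then $TEMP, $TMP), falling back
# to /tmp, so large uploads streamed through stream_to_file can fill /tmp.
# Redirect it to a directory on a bigger disk (illustrative path):
bigger = os.path.abspath("galaxy_tmp")
os.makedirs(bigger, exist_ok=True)
os.environ["TMPDIR"] = bigger
tempfile.tempdir = None  # force gettempdir() to re-read the environment
print(tempfile.gettempdir())
```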
Thanks,
Jan
On Mar 7, 2012, at 9:22 PM, Brad Chapman wrote:
Lance and Peter;
Peter, thanks for noticing the problem and duplicate tools. Lance, I'm
happy to merge these so there are not two different versions out there.
I prefer your use for genomeCoverageBed over my custom hacks. That's a
nice approach I totally missed.
I avoid the need for the sam
Peter and Lance;
I've made the update to Brad's script from the Tool Shed (attached),
switching to using genomeCoverageBed and bedGraphToBigWig
(based on the approach used in Lance's script), although in doing so
I dropped the region support (which wasn't exposed to the Galaxy
interface
Brian;
I wrote a pipeline (xml attached) that, from what I can gather,
succeeds, but galaxy shows it as an error and doesn't make the output
file accessible as a new data set.
Is it possible the software is writing to standard error? Galaxy doesn't
check status codes, but rather checks for
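Since output on standard error is what marks a job as failed, a wrapper can intercept it and only surface stderr when the tool actually exits non-zero. A minimal hedged sketch, with an `echo` placeholder standing in for the real tool command:

```python
import subprocess
import sys

# Galaxy (at the time) flagged a job as failed when anything appeared on
# stderr; capture both streams and only re-emit stderr on a real failure.
proc = subprocess.run(
    ["echo", "analysis complete"],   # placeholder for the real tool command
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
if proc.returncode != 0:
    sys.stderr.write(proc.stderr.decode())
    sys.exit(proc.returncode)
sys.stdout.write(proc.stdout.decode())
```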
Hi all;
I've been working on updating the bam_to_bigwig tool in the Toolshed
with an improved wrapper from Peter and Lance:
http://toolshed.g2.bx.psu.edu/repository/view_repository?id=2498224736779bdb
I pushed updates to the mercurial repository and now the latest revision
is at 3:294e9dae5a9b.
Hi all;
I've been thinking about ways to upload into user histories via the API.
My goal is to be able to have an analysis platform that works
alongside Galaxy and can easily pass files back and forth through the
history.
The tools API turned out to be a great fit for this, since you can
noted them in the request's comments
section.
Best,
J.
On Oct 4, 2012, at 3:44 PM, Brad Chapman wrote:
Hi all;
Open Bio regularly organizes hackathon coding sessions in conjunction
with the Bioinformatics Open Source Conference. The goal is to get
together biologists writing open source code, provide a room and
internet, and encourage fun collaborative coding. We've had successful
two day
Erik and Jeremy;
I am a JBrowse Dev hoping to add the ability to export data directly
from JBrowse (JavaScript) to Galaxy
The API could be used for this.
Specifically, you could do an upload for the user from a URL via the
tools API. I know that Brad Chapman (cc'd) has done
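A hedged sketch of what such an upload request can look like, posting the `upload1` tool to `/api/tools` (the API key, history id, and source URL are placeholders, and the actual request line is left commented out):

```python
import json
import urllib.request

# hypothetical values: the API key, history id, and data URL are placeholders
galaxy_url = "https://usegalaxy.org"
payload = {
    "key": "YOUR_API_KEY",
    "tool_id": "upload1",
    "history_id": "YOUR_HISTORY_ID",
    "inputs": {
        "files_0|url_paste": "http://example.org/data.bed",
        "files_0|type": "upload_dataset",
        "file_type": "bed",
        "dbkey": "hg19",
    },
}
req = urllib.request.Request(
    galaxy_url + "/api/tools",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would submit the upload as a history job
print(req.full_url)
```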
Hi all;
I ran into SSL certification errors when using Java to connect to Galaxy
main via the API. My knowledge of this stuff is minimal, but I did some
searching and discovered that the certificate chain on Galaxy main is a problem:
Scooter;
(cc'ing the dev list and updating the subject line in case others are
interested)
I have been looking for Java-related APIs to run workflows externally and
haven't found anything searching message forums etc. I would like to
automate data coming off our HiSeq being uploaded to Amazon S3 and
Fabiano;
I help with the BioCloudCentral site, which is a community-maintained
way to launch CloudMan and CloudBioLinux AMIs. Sorry for any confusion
between the different methods. Dannon, if you're up for it we should try
to coordinate better at least on the documentation side.
In terms of your
started.
Jen
Galaxy
On 2/21/13 12:43 PM, Brad Chapman wrote:
Hi all;
Is there a way for community members to contribute indexes to the rsync
server? This resource is awesome and I'm working on migrating the
CloudBioLinux retrieval scripts to use this instead of the custom S3
buckets we'd set
Hi all;
There are some upcoming coding events and conferences of interest to open source
biology programmers:
- BOSC/Broad Interoperability Hackathon -- This is a two day coding session at
the Broad Institute in Cambridge, MA on April 7-8 focused on improving tool
interoperability.
Hi all;
I'm helping organize a bioinformatics mini-symposium as part of SciPy 2013:
Bioinformatics mini-symposia: http://j.mp/Z4xxXB
SciPy info: http://conference.scipy.org/scipy2013/about.php
This is a great chance for the Python bioinformatics community to connect
with the wider Python
Zeeshan and Roman;
Thanks for passing this on. The complaints about keys are only warnings so
aren't the root cause of the issue. The failure is coming from here:
Err http://nebc.nerc.ac.uk unstable Release.gpg
Unable to connect to nebc.nerc.ac.uk:http:
Zeeshan, it looks like your machine
Mic;
I have tried to install gff with easy_install, but I got the following
error:
$ easy_install --prefix=/home/mic/apps/pymodules -UZ
https://github.com/chapmanb/bcbb/tree/master/gff
Downloading https://github.com/chapmanb/bcbb/tree/master/gff
error: Unexpected HTML page found at
Mic;
(moving to galaxy-dev list so folks there can follow, but future
questions are more appropriate for the Biopython list only since
this isn't a Galaxy question)
I have the following GFF file from a SNAP
X1 SNAPEinit 25792712-3.221 + . X1-snap.1
[...]
With
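Since the pasted GFF line above lost its tab separators, here is a minimal stdlib sketch of splitting a GFF feature line into its nine named columns (the sample values are illustrative, not the original email's line):

```python
# GFF is nine tab-separated columns; name them for readability
GFF_COLUMNS = ["seqid", "source", "type", "start", "end",
               "score", "strand", "frame", "attributes"]

def parse_gff_line(line):
    """Split one GFF feature line into a dict of named columns."""
    fields = line.rstrip("\n").split("\t")
    record = dict(zip(GFF_COLUMNS, fields))
    record["start"] = int(record["start"])
    record["end"] = int(record["end"])
    return record

# illustrative SNAP-style line; the coordinates and score are made up
example = "X1\tSNAP\tEinit\t2579\t2712\t-3.221\t+\t.\tX1-snap.1"
print(parse_gff_line(example)["type"])
```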
Lee;
Hi, what roles are embedded in nglims? (am I calling it by the right
name?)
I have a sequencing role which will be given to anyone submitting a
request, and that seems to be working just fine. I am also using the yaml
form to edit the submission form, and I think it's beautifully
at 9:35 AM, Brad Chapman chapm...@50mail.com wrote:
Lee;
Hi, I would like to remove the multiplexed menu from the nglims sample
information form. How would I do that? In our lab, we don't want
submitters making their own libraries since they might accidentally use the
same barcodes or somehow mess up others' libraries. We want to remove
, line 305, in
__new__
sqlalchemy.exc.InvalidRequestError: Table 'sample_request_map' is already
defined for this MetaData instance. Specify 'extend_existing=True' to
redefine options and columns on an existing Table object.
On Mon, Jul 1, 2013 at 12:46 PM, Brad Chapman chapm...@50mail.com wrote
Lee;
Thanks for all the patience with this. I'll take the errors in order:
Original exception was:
Traceback (most recent call last):
[...]
raise TypeError(upgrade/downgrade functions must accept engine
TypeError: upgrade/downgrade functions must accept engine parameter (since
version
Lee;
Hi, I get an internal server error when clicking sequencing results
under the lab menu. I'm hoping for an easy fix :)
Basically the same thing happens on a few different menus actually,
including most recently Review tool migration stages, so I don't think
it's an nglims thing anymore.
Lee;
Hi, I would like to autofill cycles for next-gen sequencing. Most of our
submitters probably don't even know what that means actually and so we want
to simplify it. Same goes with paired end reads. How would I fill out the
nglims yaml file so that they could skip over that section?
, Brad Chapman chapm...@50mail.com wrote:
Lee;
Hi, I am using nglims, and I am getting Internal Error pages sometimes.
I
am not sure if it is because of Galaxy itself or nglims though.
I believe these are all Galaxy-specific things. I haven't changed
anything purposefully in the groups
Kyle;
I'm also excited about Docker for easing installation issues and fully
capturing run environments. I haven't yet done anything specifically for
Galaxy tools but put together a functional docker installation of
bcbio-nextgen as a step towards integration:
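A hedged sketch of the general Docker packaging pattern being described (the base image, package name, and entrypoint are illustrative, not the actual bcbio-nextgen Dockerfile):

```dockerfile
# illustrative pattern for capturing a tool's run environment in an image
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3-pip
RUN pip3 install some-analysis-tool    # placeholder package name
ENTRYPOINT ["some-analysis-tool"]
```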