Re: [galaxy-dev] Import history from web server to local server

2013-02-07 Thread julie dubois
Sorry for the naive question, but when I click on the link provided by the
Export to File option, I'm redirected to a web page that shows me this message:
Still exporting history MyHistory; please check back soon. Link:
http://cistrome.org/ap/history/export_archive?id=###

I can't download the history! What is the problem? I must have
misunderstood your explanation.
Thanks.
julie

2013/2/6 Jeremy Goecks jeremy.goe...@emory.edu

 Unfortunately using wget won't work in this case. The reason you have
 access to the history is your Galaxy cookie, which isn't shared with
 wget/curl.

 You'll need to click on the export link in your Web browser to download
 the history to your local computer and then move it to a local server.

 Best,
 J.

 On Feb 6, 2013, at 12:07 PM, julie dubois wrote:

 Thanks!
 Just one question: how can I download this compressed history? Is it the
 same as:
 wget url_of_exported_history
 and then copying the resulting file to a local server? I ask because the
 file that command produces is not an archive but a text file containing HTML code.

 thanks again.
 Julie

 2013/2/6 Jeremy Goecks jeremy.goe...@emory.edu

 Ah, I see the issue: the Cistrome instance cannot be used anonymously
 (without login). It's not possible for one Galaxy instance to work with
 another instance's history because instances work with objects anonymously
 rather than using login credentials.

 For now, you can download/copy the compressed history to a Web-accessible
 location (e.g. local web server, Dropbox) and import the history from that
 location. We'll look into improving this in the future.

 Best,
 J.
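Jeremy's workaround (put the downloaded archive somewhere web-accessible, then import it by URL) can be sketched with Python's stdlib HTTP server. This is only an illustration; the host and archive name below are placeholders, nothing here is Galaxy-specific.

```python
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory (where the downloaded history archive was
# saved) over plain HTTP. Port 0 lets the OS pick a free port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Paste a URL of this shape into the other instance's "Import from File" form:
print("http://<this-host>:%d/<archive-name>.tar.gz" % port)
```

Any static host (a departmental web server, a public Dropbox link) works just as well; the only requirement is that the archive URL be reachable without a Galaxy login.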

 On Feb 6, 2013, at 11:10 AM, julie dubois wrote:

 Hi, thanks for your help.
 I've tested your procedure and it doesn't work. I have the same error.

 Sorry and thanks for the creation of the card.
 Julie

 2013/2/6 Jeremy Goecks jeremy.goe...@emory.edu

 This is likely a permissions issue: importing your history into another
 Galaxy instance requires the history to be accessible. Here's a solution
 that, while ugly, should work:

 (1) Make the history accessible by going to Share/Publish and clicking
 the option to make it accessible.
 (2) Export your history again.
 (3) Use the history URL to import to another instance.

 I've created a card for enhancements to this feature that will make this
 process easier in the future.

 https://trello.com/c/qCfAWeYU

 Best,
 J


 On Feb 6, 2013, at 5:00 AM, julie dubois wrote:

 Hi all,
 I'm trying to import a history from the web server to my local server.
 First I chose Export to File in the history menu and saw a
 message with a URL.
 Second, on my local server I created a new history and chose
 Import from File in the new history's menu.
 I pasted the URL into the form and a message said:
 Importing history from '
 http://cistrome.org/ap/history/export_archive?id='.
 This history will be visible when the import is complete

 But there is nothing in my new history and it seems that no operation is
 running!

 Is this normal?
 Is there another way to transfer a history and its datasets to another instance?
 Thanks.
 julie
 ___
 Please keep all replies on the list by using reply all
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/








Re: [galaxy-dev] card 79: Split large jobs over multiple nodes for processing

2013-02-07 Thread Peter Cock
On Wed, Feb 6, 2013 at 11:43 PM, alex.khassa...@csiro.au wrote:

 Hi All,

 Can anybody please add a few words on how we can use the "initial
 implementation" which "exists in the tasks framework"?

 -Alex


To enable this, set use_tasked_jobs = True in your universe_wsgi.ini
file. The tools must also be configured to allow this via the
parallelism tag. Many of my tools do this, for example see the NCBI
BLAST+ wrappers in the tool shed. Additionally the data file formats
must support being split, or being merged - which is done via Python
code in the Galaxy datatype definition (see the split and merge
methods in lib/galaxy/datatypes/*.py). Some other relevant Python code
is in lib/galaxy/jobs/splitters/*.py.
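Concretely, the server-side switch Peter refers to is a single line in the config file (the tool's XML must additionally declare a parallelism tag; its exact attributes depend on the tool, so see the NCBI BLAST+ wrappers in the tool shed for a worked example):

```
# universe_wsgi.ini
use_tasked_jobs = True
```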

Peter
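The split/merge hooks Peter points at are easiest to picture on a record-oriented format. The sketch below is illustrative plain Python, not Galaxy's actual datatype API: it splits a FASTA file into record-aligned parts and merges parts back by concatenation, which is the kind of logic a datatype's split and merge methods must implement.

```python
def split_fasta(path, n_parts, out_prefix):
    """Split a FASTA file into n_parts files without breaking records."""
    with open(path) as fh:
        records, current = [], []
        for line in fh:
            if line.startswith(">") and current:
                records.append("".join(current))
                current = []
            current.append(line)
        if current:
            records.append("".join(current))
    paths = []
    for i in range(n_parts):
        part_path = "%s.%d.fasta" % (out_prefix, i)
        with open(part_path, "w") as out:
            # Round-robin assignment keeps part sizes balanced.
            out.writelines(records[i::n_parts])
        paths.append(part_path)
    return paths

def merge_fasta(part_paths, out_path):
    """Merge: for record-per-line-group formats, plain concatenation suffices."""
    with open(out_path, "w") as out:
        for p in part_paths:
            with open(p) as fh:
                out.write(fh.read())
```

Formats that carry headers (e.g. SAM) need smarter merge logic than concatenation, which is why the merge code lives with each datatype definition.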



[galaxy-dev] No peek and no column count in history items

2013-02-07 Thread graham etherington (TSL)
Hi,
Some of our users are experiencing problems with (mainly) tabular data on
our local install of Galaxy (changeset 8368:0042b30216fc, Nov 06 2012).
I'm presuming it's some kind of meta-data problem.
The first strange behavior is that they are getting green history items,
but when the history item is expanded it has 'no peek' in the data preview
and 'empty' as the line-count. When they click on the eye icon, their data
appears and the output is as expected.
Sometimes the data preview panel (in the history item) has the number of
columns across the top of it (but still with 'no peek') and this data can
be used downstream, but often the columns are missing or incorrect and
although the output file is correctly tabulated, it cannot be used
downstream.
All of these problems can be fixed by using Auto-detect under Edit
Attributes: this provides the correct column count, the line count, and a
peek.
This happens with lots of different tools, usually (but not exclusively)
with tabular data.
I'm wondering if anyone has ever encountered this problem before and what
they did to address it.
Many thanks,

Graham



Dr. Graham Etherington
Bioinformatics Support Officer,
The Sainsbury Laboratory,
Norwich Research Park,
Norwich NR4 7UH.
UK
Tel: +44 (0)1603 450601






Re: [galaxy-dev] Reloading a tools configuration does not seem to actually work

2013-02-07 Thread Dannon Baker
Unfortunately not, and with the migration of tools to the toolshed installation 
mechanism I don't imagine this will be addressed (at least by the team) anytime 
soon.  If you wanted you could probably write a script that would reload a 
specified tool in each of the separate web processes, or just implement a 
complete rolling restart of your web processes to avoid service disruption 
while still loading the tool updates.

-Dannon


On Feb 6, 2013, at 8:40 PM, Anthonius deBoer thondeb...@me.com wrote:

 I am indeed using multiple web processes, and I guess I am talking about the 
 old admin tool reloader...
 Is there any other way to do this for tools that you place manually in the 
 tools directory?
 
 Thon
 
 On Feb 05, 2013, at 06:22 PM, Dannon Baker dannonba...@me.com wrote:
 
 Are you using multiple web processes, and are you referring to the old admin 
 tool reloader or the toolshed reloading interface?
 
 -Dannon
 
 On Feb 5, 2013, at 9:13 PM, Anthonius deBoer thondeb...@me.com wrote:
 
  Hi,
  
  I find that reloading a tool's configuration file does not really work.
  First, you have to click the reload button twice to actually have it 
  update the VERSION number (so it does read something)...
  But when I try to run my tool, the old bug is still there...
  
  I am using a proxy server, so something may still be cached, but I have to 
  restart my server for it to actually pick up the changes...
  
  Any ideas?
  
  Thon
 



Re: [galaxy-dev] VirtualEnv

2013-02-07 Thread Nate Coraor

On Feb 6, 2013, at 11:04 AM, Thyssen, Gregory - ARS wrote:

 Hello
 Everything seems to be working on my local Galaxy.
 I was talking to my IT guy who did the initial installation, and he said that 
 virtualenv may not have been installed.
 When he ran
 % yum install python-virtualenv.noarch
 he got a "not found" error.
  
 He did install mercurial and then completed the install instructions from
 www.biocodershub.net/community/guest-post-notes-on-installing-galaxy/
  
 What should we do at this point?
 Is virtualenv dispensable?
 What should I expect to be broken?
 What's the fix?

Hi Greg,

virtualenv is not required, it just prevents site packages (extra modules not 
installed with the default python interpreter) from interfering with Galaxy's 
internally managed dependencies.  You can also install virtualenv using pip, 
easy_install, setup.py, or by just downloading virtualenv.py and running it 
directly.

http://pypi.python.org/pypi/virtualenv
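The isolation Nate describes (keeping stray site packages away from Galaxy's internally managed dependencies) can be demonstrated with the stdlib venv module, virtualenv's modern stdlib counterpart (assumes Python 3.3+); this sketch only shows what such an environment contains.

```python
import os
import tempfile
import venv

# Create an isolated environment in a scratch directory. Its bin/ (or
# Scripts/ on Windows) holds a private interpreter whose site-packages
# directory is independent of the system Python's.
target = os.path.join(tempfile.mkdtemp(), "galaxy_env")
venv.EnvBuilder(with_pip=False).create(target)
bin_dir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(target, bin_dir)))
```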

--nate

  
 Thank you,
  
 Gregory Thyssen, PhD
 Molecular Biologist
 Cotton Fiber Bioscience
 USDA-ARS-Southern Regional Research Center
 1100 Robert E Lee Blvd
 New Orleans, LA 70124
 gregory.thys...@ars.usda.gov
 504-286-4280
  
  
 
 
 
 


Re: [galaxy-dev] Import history from web server to local server

2013-02-07 Thread Jeremy Goecks
This means the history is still being compressed; large histories can take a 
long time to compress, so you'll have to be patient and check back 
periodically to see when the history is ready for download.

Best,
J.

On Feb 7, 2013, at 4:02 AM, julie dubois wrote:

 Sorry for the naive question, but when I click on the link provided by the 
 Export to File option, I'm redirected to a web page that shows me this message: 
 Still exporting history MyHistory; please check back soon. Link: 
 http://cistrome.org/ap/history/export_archive?id=###
 
 I can't download the history! What is the problem? I must have 
 misunderstood your explanation.
 Thanks.
 julie
 

Re: [galaxy-dev] No peek and no column count in history items

2013-02-07 Thread Nate Coraor
On Feb 7, 2013, at 6:04 AM, graham etherington (TSL) wrote:

 Hi,
 Some of our users are experiencing problems with (mainly) tabular data on
 our local install of Galaxy (changeset 8368:0042b30216fc, Nov 06 2012).
 I'm presuming it's some kind of meta-data problem.
 The first strange behavior is that they are getting green history items,
 but when the history item is expanded it has 'no peek' in the data preview
 and 'empty' as the line-count. When they click on the eye icon, their data
 appears and the output is as expected.
 Sometimes the data preview panel (in the history item) has the number of
 columns across the top of it (but still with 'no peek') and this data can
 be used downstream, but often the columns are missing or incorrect and
 although the output file is correctly tabulated, it cannot be used
 downstream.
 All of these problems can be fixed by using Auto-detect under Edit
 Attributes: this provides the correct column count, the line count, and a
 peek.
 This happens with lots of different tools, usually (but not exclusively)
 with tabular data.
 I'm wondering if anyone has ever encountered this problem before and what
 they did to address it.
 Many thanks,

Hi Graham,

Is this a sporadic problem, and are you using a cluster (and a shared 
filesystem)?

--nate

 
 Graham
 
 
 
 Dr. Graham Etherington
 Bioinformatics Support Officer,
 The Sainsbury Laboratory,
 Norwich Research Park,
 Norwich NR4 7UH.
 UK
 Tel: +44 (0)1603 450601
 
 
 
 


Re: [galaxy-dev] No peek and no column count in history items

2013-02-07 Thread Nate Coraor
On Feb 7, 2013, at 9:07 AM, graham etherington (TSL) wrote:

 Hi Nate,
 It's a sporadic problem (in that it happens quite often, but not all the
 time) and yes, the Galaxy jobs are dispatched to a cluster.
 I'm not sure about the shared file system. Galaxy is defined as a user on
 our cluster, along with many other users. Does that answer your question?
 Cheers,
 Graham

Thanks Graham, that information helps.  It's not really possible to use Galaxy 
on a cluster without a shared filesystem at this point.  My guess as to what's 
going on here is that the filesystem is caching attributes on job outputs 
(e.g. caching that the file is empty).  This can be disabled via specific 
mount options; there is some discussion of it at the bottom of this section 
of the documentation:


http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#Unified_Method
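For reference, the attribute-caching knobs live in the NFS mount options. A hypothetical /etc/fstab line might look like the following (server, export, and mountpoint names are placeholders; `noac` disables attribute caching entirely, while `actimeo=0` sets the cache timeouts to zero — both trade extra metadata traffic for correctness):

```
# /etc/fstab (sketch)
fileserver:/export/galaxy  /mnt/galaxy  nfs  rw,hard,intr,noac  0  0
```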

--nate

 


Re: [galaxy-dev] Why upload to Local Instance rather than import?

2013-02-07 Thread Jennifer Jackson

Hi Greg,

I believe that you do not need to actually copy the data to your 
workstation (although you could) - instead, symbolically link it 
from the external drive to your workstation so that it appears local, 
then upload from that path. The data library "upload by path" option 
will follow a single symbolic link to the data.
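The symlink approach can be sketched as below; the two directories here are created on the fly purely for illustration ("external" stands in for the USB drive, "library" for a directory that the "upload by path" option reads from — the real paths are site-specific).

```python
import os
import tempfile

# Hypothetical layout: 'external' plays the role of the USB drive,
# 'library' the directory Galaxy is pointed at for upload-by-path.
external = tempfile.mkdtemp(prefix="external_")
library = tempfile.mkdtemp(prefix="library_")

src = os.path.join(external, "sample.fastq")
with open(src, "w") as f:
    f.write("@read1\nACGT\n+\nIIII\n")

dst = os.path.join(library, "sample.fastq")
os.symlink(src, dst)  # no copy: the library entry points at the drive

print(os.path.islink(dst), open(dst).read() == open(src).read())
```

Note that if the external drive is unmounted later, the library entries become dangling links, so this suits drives that stay attached.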


This is explained in much more detail in our wiki, please see the 
section Upload directory of files:

http://wiki.galaxyproject.org/Admin/DataLibraries/UploadingLibraryFiles

Sorry for the late reply - this email got sorted incorrectly. It really 
is best either to send replies back to the mailing list or to start over 
with a new thread (a brand new message to the list only, not just a reply 
with a new subject line). This ensures we get your question tagged and 
tracked - and you benefit from all the helpful developers on this list 
who know more than I do about many technical aspects of Galaxy.


If anyone else has suggestions for Greg, please feel free to add to the 
post.


Jen
Galaxy team

On 2/4/13 12:53 PM, Thyssen, Gregory - ARS wrote:

Hi Jen,

I have a local instance of Galaxy running on my workstation.

I have external hard drives full of fastq sequencing files.

I want to make them available to myself as data libraries.  I am the
Admin and sole user at this point.

The "upload" option for data libraries seems to pass the data through the
(slow) network, even though my workstation and external hard drives are
connected by USB.

What is the fastest way to import my files into my galaxy instance? Can
I copy them into some folder on the workstation’s hard drive?

Since everything is physically connected, I don’t think I should be
limited by my network speed.

Thanks,

Greg Thyssen



--
Jennifer Hillman-Jackson
Galaxy Support and Training
http://galaxyproject.org


Re: [galaxy-dev] Import history from web server to local server

2013-02-07 Thread julie dubois
THANKS a lot!
It works. This is a big step forward for us. Thanks again.
Julie

2013/2/7 Jeremy Goecks jeremy.goe...@emory.edu

 This means the history is still being compressed; large histories can take
 a long time to compress, so you'll have to be patient and check back
 periodically to see when the history is ready for download.

 Best,
 J.


Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device

2013-02-07 Thread greg
Update:

When I run as the Galaxy user, Python does have the right temp directory:

>>> tempfile.gettempdir()
'/scratch/galaxy'

So does that mean this upload job isn't running as galaxy, or is
skipping the job_environment_setup_file?  Or could something else be
going on?

Any ideas? Now I'm really stuck.

Thanks,

Greg



On Wed, Feb 6, 2013 at 3:35 PM, greg margeem...@gmail.com wrote:
 Ok, when I ran Python in my last two emails I was running as myself,
 not the galaxy user, and only the galaxy user has write permission to
 /scratch/galaxy

 So that's why Python was ignoring /scratch/galaxy for me.  If it
 doesn't have write access it tries the next temp directory in its
 list.

 I'm going to try debugging as the galaxy user next.

 -Greg

 On Wed, Feb 6, 2013 at 3:21 PM, greg margeem...@gmail.com wrote:
 Hi Nate,

 I don't see $TMPDIR being set on the cluster, in addition to my
 previous email I ran:

 print os.environ.keys()
 ['KDE_IS_PRELINKED', 'FACTERLIB', 'LESSOPEN', 'SGE_CELL', 'LOGNAME',
 'USER', 'INPUTRC', 'QTDIR', 'PATH', 'PS1', 'LANG', 'KDEDIR', 'TERM',
 'SHELL', 'TEMP', 'QTINC', 'G_BROKEN_FILENAMES', 'SGE_EXECD_PORT',
 'HISTSIZE', 'KDE_NO_IPV6', 'MANPATH', 'HOME', 'SGE_ROOT', 'QTLIB',
 'VIRTUAL_ENV', 'SGE_CLUSTER_NAME', '_', 'SSH_CONNECTION', 'SSH_TTY',
 'HOSTNAME', 'SSH_CLIENT', 'SHLVL', 'PWD', 'MAIL', 'LS_COLORS',
 'SGE_QMASTER_PORT']

 But I think we've narrowed it down to something interfering with
 Python deciding the temp file location.  I just can't figure out what.



 On Wed, Feb 6, 2013 at 3:18 PM, Nate Coraor n...@bx.psu.edu wrote:
 On Feb 6, 2013, at 3:00 PM, greg wrote:

 Thanks Nate,

 It turns out I already had this as the first line of my job setup file:

 export TEMP=/scratch/galaxy

 But when I look in that directory, there's plenty of free space, and I
 also don't see any recent files there.  So I'm wondering if the upload
 jobs aren't seeing that for some reason.

 Any ideas on how I could diagnose this more?

 Hi Greg,

 The first place to look would be in lib/galaxy/datatypes/sniff.py, line 96:

 fd, temp_name = tempfile.mkstemp()

 If you print temp_name, that will tell you what file the upload tool is 
 writing to.  You may also want to take a look at:

 http://docs.python.org/2/library/tempfile.html#tempfile.tempdir

 Some cluster environments set $TMPDIR, and if that is set, $TEMP will not 
 be used.

 --nate
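The precedence Nate describes can be checked directly: on POSIX, Python's tempfile consults $TMPDIR first, then $TEMP, then $TMP, keeping the first candidate it can write to. Since tempfile caches its answer per process, the sketch below probes each environment in a fresh interpreter; the scratch directory is created on the fly for illustration.

```python
import os
import subprocess
import sys
import tempfile

def probe(env_overrides):
    """Return tempfile.gettempdir() as seen by a fresh Python process
    whose TMPDIR/TEMP/TMP are exactly env_overrides."""
    env = {k: v for k, v in os.environ.items()
           if k not in ("TMPDIR", "TEMP", "TMP")}
    env.update(env_overrides)
    return subprocess.run(
        [sys.executable, "-c", "import tempfile; print(tempfile.gettempdir())"],
        env=env, capture_output=True, text=True,
    ).stdout.strip()

scratch = tempfile.mkdtemp(prefix="scratch_")
print(probe({"TEMP": scratch}))                       # TEMP honored when TMPDIR is unset
print(probe({"TMPDIR": scratch, "TEMP": "/elsewhere"}))  # TMPDIR wins over TEMP
```

This matches the behaviour Greg is seeing: a cluster-set $TMPDIR would silently override the `export TEMP=/scratch/galaxy` in the job setup file.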


 -Greg


 Relevant info?


 grep env galaxy-dist/universe_wsgi.ini
 environment_setup_file = /usr/local/galaxy/job_environment_setup_file

 cat /usr/local/galaxy/job_environment_setup_file
 export TEMP=/scratch/galaxy
 #active Python virtual env just for galaxy
 source /usr/local/galaxy/galaxy_python/bin/activate
 ... path setup lines ...










 On Wed, Feb 6, 2013 at 2:37 PM, Nate Coraor n...@bx.psu.edu wrote:
 On Feb 6, 2013, at 2:32 PM, greg wrote:

 Hi guys,

 When I try to upload a directory of files from a server directory I'm
 seeing the error below.

 It appears to be trying to write to a temp directory somewhere that
 I'm guessing doesn't have enough space?  Is there a way I can direct
 where it writes to for temporary files like this?

 Hi Greg,

 There are a few ways.  For some parts of Galaxy, you will want to set 
 new_file_path in universe_wsgi.ini to a suitable temp space.  However, 
 this is not the case for the upload tool.

 Am I understanding right, that these upload jobs are running on our
 cluster?  I think it would be a problem if its trying to use the
 default temp directory on each cluster node since they aren't
 provisioned with much space.

 This is correct.  On the cluster, add something to your user's shell 
 startup files (or see the environment_setup_file option in 
 universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment 
 variable to a suitable temp space.

 --nate


 Please advise.

 Thanks,

 Greg



 Miscellaneous information: (truncated duplicate of the traceback below)
 Job Standard Error


 Traceback (most recent call last):
   File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 384, in <module>
     __main__()
   File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 373, in __main__
     add_file( dataset, registry, json_file, output_path )
   File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 270, in add_file
     line_count, converted_path = sniff.convert_newlines( dataset.path,
 in_place=in_place )
   File "/misc/local/galaxy/galaxy-dist/lib/galaxy/datatypes/sniff.py",
 line 99, in convert_newlines
     fp.write( "%s\n" % line.rstrip( "\r\n" ) )
 IOError: [Errno 28] No space left on device

Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device

2013-02-07 Thread greg
Could I modify /misc/local/galaxy/galaxy-dist/lib/galaxy/datatypes/sniff.py
to print out debug information like the host, os.environ,
tempfile.gettempdir(), etc.?

Would I be able to see its stdout in Galaxy or the log, or is there
something special I need to do to retrieve the information?
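One low-effort option along those lines: write the debug information to stderr rather than stdout, since a tool's standard error is captured and shown with the job. The lines below are hypothetical additions for sniff.py, using only the stdlib; nothing here is Galaxy API.

```python
import os
import socket
import sys
import tempfile

# Hypothetical debug lines to paste near the mkstemp call in sniff.py.
# stderr is captured per job, so no special retrieval step is needed.
debug_lines = [
    "debug host=%s\n" % socket.gethostname(),
    "debug tempdir=%s\n" % tempfile.gettempdir(),
    "debug TMPDIR=%r TEMP=%r TMP=%r\n" % (
        os.environ.get("TMPDIR"), os.environ.get("TEMP"), os.environ.get("TMP")),
]
sys.stderr.write("".join(debug_lines))
```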

On Thu, Feb 7, 2013 at 1:12 PM, greg margeem...@gmail.com wrote:
 Update:

 When I run as the Galaxy user, Python does have the right temp directory:

 >>> tempfile.gettempdir()
 '/scratch/galaxy'

 So does that mean this upload job isn't running as galaxy, or is
 skipping the job_environment_setup_file?  Or could something else be
 going on?

 Any ideas? Now I'm really stuck.

 Thanks,

 Greg



 On Wed, Feb 6, 2013 at 3:35 PM, greg margeem...@gmail.com wrote:
 Ok, when I ran Python in my last two emails I was running as myself,
 not the galaxy user, and only the galaxy user has write permission to
 /scratch/galaxy

 So that's why Python was ignoring /scratch/galaxy for me.  If it
 doesn't have write access it tries the next temp directory in its
 list.

 I'm going to try debugging as the galaxy user next.

 -Greg

 On Wed, Feb 6, 2013 at 3:21 PM, greg margeem...@gmail.com wrote:
 Hi Nate,

 I don't see $TMPDIR being set on the cluster, in addition to my
 previous email I ran:

 print os.environ.keys()
 ['KDE_IS_PRELINKED', 'FACTERLIB', 'LESSOPEN', 'SGE_CELL', 'LOGNAME',
 'USER', 'INPUTRC', 'QTDIR', 'PATH', 'PS1', 'LANG', 'KDEDIR', 'TERM',
 'SHELL', 'TEMP', 'QTINC', 'G_BROKEN_FILENAMES', 'SGE_EXECD_PORT',
 'HISTSIZE', 'KDE_NO_IPV6', 'MANPATH', 'HOME', 'SGE_ROOT', 'QTLIB',
 'VIRTUAL_ENV', 'SGE_CLUSTER_NAME', '_', 'SSH_CONNECTION', 'SSH_TTY',
 'HOSTNAME', 'SSH_CLIENT', 'SHLVL', 'PWD', 'MAIL', 'LS_COLORS',
 'SGE_QMASTER_PORT']

 But I think we've narrowed it down to something interfering with
 Python deciding the temp file location.  I just can't figure out what.



 On Wed, Feb 6, 2013 at 3:18 PM, Nate Coraor n...@bx.psu.edu wrote:
 On Feb 6, 2013, at 3:00 PM, greg wrote:

 Thanks Nate,

 It turns out I already had this as the first line of my job setup file:

 export TEMP=/scratch/galaxy

 But when I look in that directory, there's plenty of free space, and I
 also don't see any recent files there.  So I'm wondering if the upload
 jobs aren't seeing that for some reason.

 Any ideas on how I could diagnose this more?

 Hi Greg,

 The first place to look would be in lib/galaxy/datatypes/sniff.py, line 96:

 fd, temp_name = tempfile.mkstemp()

 If you print temp_name, that will tell you what file the upload tool is 
 writing to.  You may also want to take a look at:

 http://docs.python.org/2/library/tempfile.html#tempfile.tempdir

 Some cluster environments set $TMPDIR, and if that is set, $TEMP will not 
 be used.

 --nate
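
Nate's point about precedence can be checked directly: tempfile builds its candidate list from $TMPDIR, then $TEMP, then $TMP, and only then the platform defaults such as /tmp — so an exported TEMP is ignored whenever the scheduler also exports TMPDIR:

```python
import os
import tempfile

# Show which environment variables tempfile will consider, in the
# order it considers them, and which directory it actually picks.
for var in ("TMPDIR", "TEMP", "TMP"):
    print("%s=%s" % (var, os.environ.get(var)))
print("effective temp dir:", tempfile.gettempdir())
```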


 -Greg


 Relevant info?


 grep env galaxy-dist/universe_wsgi.ini
 environment_setup_file = /usr/local/galaxy/job_environment_setup_file

 cat /usr/local/galaxy/job_environment_setup_file
 export TEMP=/scratch/galaxy
 # activate Python virtual env just for galaxy
 source /usr/local/galaxy/galaxy_python/bin/activate
 ... path setup lines ...










 On Wed, Feb 6, 2013 at 2:37 PM, Nate Coraor n...@bx.psu.edu wrote:
 On Feb 6, 2013, at 2:32 PM, greg wrote:

 Hi guys,

 When I try to upload a directory of files from a server directory I'm
 seeing the error below.

 It appears to be trying to write to a temp directory somewhere that
 I'm guessing doesn't have enough space?  Is there a way I can direct
 where it writes to for temporary files like this?

 Hi Greg,

 There are a few ways.  For some parts of Galaxy, you will want to set 
 new_file_path in universe_wsgi.ini to a suitable temp space.  However, 
 this is not the case for the upload tool.

 Am I understanding right, that these upload jobs are running on our
 cluster?  I think it would be a problem if it's trying to use the
 default temp directory on each cluster node since they aren't
 provisioned with much space.

 This is correct.  On the cluster, add something to your user's shell 
 startup files (or see the environment_setup_file option in 
 universe_wsgi.ini) that will set the $TEMP or $TMPDIR environment 
 variable to a suitable temp space.

 --nate


 Please advise.

 Thanks,

 Greg



 Miscellaneous information: Traceback (most recent call last): File
 "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line
 384, in __main__() File
 "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py", line
 373, in __main__ add_file( dataset,
 Job Standard Error


 Traceback (most recent call last):
 File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 384, in
   __main__()
 File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 373, in __main__
   add_file( dataset, registry, json_file, output_path )
 File "/misc/local/galaxy/galaxy-dist/tools/data_source/upload.py",
 line 270, in add_file
   line_count, converted_path = sniff.convert_newlines( dataset.path,
 in_place=in_place )
 File 

Re: [galaxy-dev] Uploading a Directory of Files - IOError: [Errno 28] No space left on device

2013-02-07 Thread greg
I think I found the problem.  The TMPDIR environment variable was set
to /tmp/5393732.1.f03.q for jobs galaxy was running. (I guess the
admins do this?)

I updated /usr/local/galaxy/job_environment_setup_file and also
/home/galaxy/.bashrc to set TMPDIR to /scratch/galaxy and it seems to
work now.

Thanks for the help.

-Greg


Re: [galaxy-dev] Reloading a tools configuration does not seem to actually work

2013-02-07 Thread Anthonius deBoer
That's very unfortunate... I have a ton of tools and I guess now I have to
create a package for them in a local toolshed to update them in a running
galaxy server?

In any case... the toolshed installation also does not work for me... I
still have to restart galaxy, even after using the toolshed approach to
install a tool... It either does not show up at all or gives a bunch of
errors about not being able to find the tool... Is this also related to the
fact I have two webservers and am behind a proxy server as well?

Thon

On Feb 07, 2013, at 05:29 AM, Dannon Baker dannonba...@me.com wrote:

 Unfortunately not, and with the migration of tools to the toolshed
 installation mechanism I don't imagine this will be addressed (at least by
 the team) anytime soon. If you wanted you could probably write a script
 that would reload a specified tool in each of the separate web processes,
 or just implement a complete rolling restart of your web processes to
 avoid service disruption while still loading the tool updates.

 -Dannon

 On Feb 6, 2013, at 8:40 PM, Anthonius deBoer thondeb...@me.com wrote:

  I am indeed using multiple web processes and I guess I am talking about
  the "old" admin tool reloader... Is there any other way to do this for
  your own tools that you just manually place in tools etc.?

  Thon

  On Feb 05, 2013, at 06:22 PM, Dannon Baker dannonba...@me.com wrote:

   Are you using multiple web processes, and are you referring to the old
   admin tool reloader or the toolshed reloading interface?

   -Dannon

   On Feb 5, 2013, at 9:13 PM, Anthonius deBoer thondeb...@me.com wrote:

    Hi,

    I find that reloading a tool's configuration file does not really
    work.  First, you have to click the reload button twice to actually
    have it update the VERSION number (so it does read something)... But
    when I try to run my tool, the old bug is still there... I am using a
    proxy server so something may still be cached, but I have to restart
    my server for it actually to pick up the changes... Any ideas?

    Thon

[galaxy-dev] What image formats can be displayed in the Galaxy view panel?

2013-02-07 Thread Luobin Yang
Hi,

I tried the pcx and ps formats, but the browser just downloads these kinds
of files instead of rendering them in the Galaxy window... It seems png and
pdf files can be rendered in the Galaxy window. How can I make Galaxy
display other image formats like ps and pcx?

Thanks,
Luobin

Re: [galaxy-dev] What image formats can be displayed in the Galaxy view panel?

2013-02-07 Thread Peter Cock
On Thu, Feb 7, 2013 at 9:41 PM, Luobin Yang yangl...@isu.edu wrote:
 Hi,

 I tried the pcx and ps formats, but the browser just downloads these kinds
 of files instead rendering them in the Galaxy window... It seems png and pdf
 files can be rendered in the Galaxy windows. How can I make Galaxy display
 other image formats like ps and pcx?

 Thanks,
 Luobin

I would guess this is possible but only if those other image types are
first defined in Galaxy as new datatypes (with sensible MIME type
values).

Peter


Re: [galaxy-dev] What image formats can be displayed in the Galaxy view panel?

2013-02-07 Thread Ross
Luobin - one additional minor observation: In reality, Galaxy does not do
the displaying - it just sends stuff to the users' web browser for display.
So even when Galaxy knows what mimetype to attach to a specific image file,
the users' web browser response to that mimetype will always remain the
final frontier. Not everyone uses IE where dark magic just happens :)

Out of the box, many linux distributions do not enable (potentially
exploitable) browser plugin viewers for PDF or SVG for example - so
correctly displaying some mimetypes will always be beyond the control of
the Galaxy server.
EG: in 12.04 Ubuntu on my desktop using chrome or firefox, automagic pdf
viewing on a 64 bit system took a fair bit of fiddling to get working -
until then, even when Galaxy supplies the required mimetype, the user has
to download the PDF and open it by hand.
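
For reference, the standard MIME types involved here can be looked up with Python's mimetypes module (the result for a niche format like pcx depends on the system's mime database, so no output is guaranteed for it):

```python
import mimetypes

# The browser, not Galaxy, decides how each Content-Type is handled;
# these are the MIME types Galaxy would attach to the formats discussed.
print(mimetypes.guess_type("plot.png"))   # ('image/png', None)
print(mimetypes.guess_type("figure.ps"))  # ('application/postscript', None)
print(mimetypes.guess_type("scan.pcx"))   # system-dependent
```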




Re: [galaxy-dev] card 79: Split large jobs over multiple nodes for processing

2013-02-07 Thread Alex.Khassapov
Thanks Peter. I see, parallelism works on a single large file by splitting it 
and using multiple instances to process the bits in parallel.

In our case we use 'composite' data type, simply an array of input files and we 
would like to process them in parallel, instead of having a 'foreach' loop in 
the tool wrapper.

Is it possible?

We are looking at CloudMan for creating a cluster in Galaxy now.

-Alex

-Original Message-
From: Peter Cock [mailto:p.j.a.c...@googlemail.com] 
Sent: Thursday, 7 February 2013 9:09 PM
To: Khassapov, Alex (CSIRO IMT, Clayton)
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] card 79: Split large jobs over multiple nodes for 
processing

On Wed, Feb 6, 2013 at 11:43 PM, alex.khassa...@csiro.au wrote:

 Hi All,

 Can anybody please add a few words on how can we use the initial 
 implementation which  exists in the tasks framework?

 -Alex


To enable this, set use_tasked_jobs = True in your universe_wsgi.ini file. The 
tools must also be configured to allow this via the parallelism tag. Many of 
my tools do this, for example see the NCBI
BLAST+ wrappers in the tool shed. Additionally the data file formats
must support being split, or being merged - which is done via Python code in 
the Galaxy datatype definition (see the split and merge methods in 
lib/galaxy/datatypes/*.py). Some other relevant Python code is in 
lib/galaxy/jobs/splitters/*.py

Peter
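
Not the Galaxy datatype API — just a minimal sketch of the split/merge contract Peter describes, for a line-oriented format where any line boundary is a safe split point:

```python
def split(path, n_parts):
    """Write n_parts chunk files next to `path` and return their paths."""
    with open(path) as src:
        lines = src.readlines()
    size = -(-len(lines) // n_parts)  # ceiling division: lines per chunk
    parts = []
    for i in range(n_parts):
        part_path = "%s.part%d" % (path, i)
        with open(part_path, "w") as dst:
            dst.writelines(lines[i * size:(i + 1) * size])
        parts.append(part_path)
    return parts

def merge(part_paths, out_path):
    """Concatenate the chunk outputs, in order, back into one dataset."""
    with open(out_path, "w") as dst:
        for p in part_paths:
            with open(p) as src:
                dst.write(src.read())
```

In Galaxy the equivalents live as methods on the datatype classes (see lib/galaxy/datatypes/*.py, as Peter notes); formats with multi-line records, such as FASTQ, need record-aware splitting rather than this naive line slicing.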



Re: [galaxy-dev] workflow intermediate files hog memory

2013-02-07 Thread mark.rose
Hi Dannon

I'm presuming that wiping hidden files as you suggest would eliminate them 
entirely, so there would be no record of them in the history, which seems 
less than desirable if you want a record of how the analysis was performed.  
It seems it would be better if the steps persisted in the history and only 
their output files were removed.  Or is there another way of keeping track 
of this?

Thanks for your help

Mark

-Original Message-
From: Dannon Baker [mailto:dannonba...@me.com]
Sent: Wednesday, February 06, 2013 8:40 AM
To: Rose Mark USRE
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] workflow intermediate files hog memory

Hey Mark,

Galaxy by default saves everything, as you've noticed.  In workflows, you can 
flag outputs after which intermediate (unflagged) steps will be 'hidden' in the 
history, but you can't automatically delete them, though this is something 
we've wanted to do for a while.  Unfortunately it requires rewriting the 
workflow execution model, so it's a larger task.  As a stopgap measure, being 
able to wipe out those 'hidden' datasets in one step would probably be useful.  
I'd actually thought this was already implemented as an option in the history 
panel menu, but I don't see it now.  I'm creating a Trello card now for adding 
that method, and there's already one for the deletion of intermediate datasets.

-Dannon


On Feb 6, 2013, at 7:07 AM, mark.r...@syngenta.com wrote:

 Hi All

 Intermediate files in a workflow often make up the large majority of a 
 workflow's output and, when this is an NGS analysis, this volume can be HUGE. 
  This is a considerable concern for me as we consider implementing a local 
 install of galaxy.  Storing all of this seems  useless (once workflow has 
 been worked out) and a huge memory hog if one wants to actually persist the 
 useful final outputs of workflows in galaxy.  Is there any way to specify 
 that the output of particular steps in a workflow be deleted (or sent to 
 /tmp) upon successful workflow completion?  How are others dealing with this? 
  Is it inadvisable to use galaxy to serve as a repository of results?

 Thanks

 Mark


 This message may contain confidential information. If you are not the
 designated recipient, please notify the sender immediately, and delete
 the original and any copies. Any use of the message by you is
 prohibited.









Re: [galaxy-dev] What image formats can be displayed in the Galaxy view panel?

2013-02-07 Thread Luobin Yang
Ross  Peter,

Thanks for clarifying on this!
Luobin






-- 
+++
Luobin Yang, Ph.D.
INBRE Bioinformatics Coordinator
Research Assistant Professor

Department of Biological Sciences,
Idaho State University
921 S. 8th Ave., Stop 8007
Pocatello, ID 83209-8007
Office: 208-282-5841

[galaxy-dev] Multiple Instances ...

2013-02-07 Thread Neil.Burdett
Hi,
   Can someone point me to the documentation on setting up/configuring 
multiple instances of Galaxy running on the same node, please?

I think this is the best method of hiding tools based upon users' email logon...

Thanks
Neil

Re: [galaxy-dev] Multiple Instances ...

2013-02-07 Thread Ross
Neil,

If by 'multiple' you mean 'independent' galaxy instances, they must each
talk to independent backend databases, so if you're thinking of running eg
2 or more independent instances at CSIRO, each for specific tool sets and
sending each of your users to one or other of them based on some smart
Apache code and their login, beware that users won't be able to share or
see any histories or datasets from one instance on the other.

That might work well - or not - but separate instances cannot safely share
the same backend database tables - they're just separate Galaxy instances -
like test and main are, and there's no specific documentation needed.

If you are asking about multiple processes (web servers etc) to scale up a
slow heavily loaded instance, that's documented in the wiki - eg a quick
search finds
http://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer?action=show&redirect=Admin%2FConfig%2FPerformance


On Fri, Feb 8, 2013 at 1:40 PM, neil.burd...@csiro.au wrote:

 Hi,
    Can someone point me to the documentation to set up/configure multiple
 instances of Galaxy running on the same node please?

 I think this is the best method of hiding tools based upon users' email
 logon...

 Thanks
 Neil


Re: [galaxy-dev] Multiple Instances ...

2013-02-07 Thread Neil.Burdett
Thanks Ross.

I did mean separate Galaxy instances like test and main with their own 
independent backend databases.

How could I run say a test and a main from the same node?

I guess I'd need to modify the port number for each instance and then add 
multiple entries in the apache config file, so that they serve physically 
separate content when loaded, i.e. http://localhost/galaxyTest and 
http://localhost/galaxyMain

RewriteEngine on
RewriteRule ^/galaxyTest$ /galaxyTest/ [R]
RewriteRule ^/galaxyTest/static/style/(.*) 
/home/nate/galaxy-dist-test/static/june_2007_style/blue/$1 [L]
RewriteRule ^/galaxyTest/static/scripts/(.*) 
/home/nate/galaxy-dist-test/static/scripts/packed/$1 [L]
RewriteRule ^/galaxyTest/static/(.*) /home/nate/galaxy-dist-test/static/$1 [L]
RewriteRule ^/galaxyTest/favicon.ico 
/home/nate/galaxy-dist-test/static/favicon.ico [L]
RewriteRule ^/galaxyTest/robots.txt 
/home/nate/galaxy-dist-test/static/robots.txt [L]
RewriteRule ^/galaxyTest (.*) http://localhost:9080$1 [P]


RewriteRule ^/galaxyMain$ /galaxyMain/ [R]

RewriteRule ^/galaxyMain/static/style/(.*) 
/home/nate/galaxy-dist-main/static/june_2007_style/blue/$1 [L]

RewriteRule ^/galaxyMain/static/scripts/(.*) 
/home/nate/galaxy-dist-main/static/scripts/packed/$1 [L]

RewriteRule ^/galaxyMain/static/(.*) /home/nate/galaxy-dist-main/static/$1 [L]

RewriteRule ^/galaxyMain/favicon.ico 
/home/nate/galaxy-dist-main/static/favicon.ico [L]

RewriteRule ^/galaxyMain/robots.txt 
/home/nate/galaxy-dist-main/static/robots.txt [L]

RewriteRule ^/galaxyMain (.*) http://localhost:9081$1 [P]



Is that correct?



Cheers

Neil




[galaxy-dev] Fwd: Multiple Instances ...

2013-02-07 Thread Ross
ok - sorry I misunderstood.
Yes, assuming you already have one Galaxy instance configured right,
cloning and editing the rewrite and authentication sections for the other
paste process should work and that looks reasonable to me FWIW - OTOH,
apache configuration is definitely one of the darker arts so the only way
to tell is to run them and keep an eye on the two separate paste process
logs as you hit each site separately.

Of course, make sure the independent instances' universe_wsgi.ini files
specify independent postgresql databases, and keep the two independent
galaxy file store directories separated or very bad things will happen.





-- 
Ross Lazarus MBBS MPH;
Head, Medical Bioinformatics, BakerIDI; Tel: +61 385321444
http://scholar.google.com/citations?hl=enuser=UCUuEM4J