[galaxy-dev] the multi job splitter
Dear list,

In my galaxy fork, I extensively use the job splitters. Sometimes, though, I have to split to different file types for the same job. That raises an exception in the lib/galaxy/jobs/splitters/multi.py module. I have turned this behaviour off for my own work, but am now wondering whether this is very bad practice. In other words, does somebody know why the multi splitter does not support multiple file type splitting?

cheers,
jorrit

--
Scientific programmer
Mass spec analysis support @ BILS
Janne Lehtiö / Lukas Käll labs
SciLifeLab Stockholm

___
Please keep all replies on the list by using reply all in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
Re: [galaxy-dev] the multi job splitter
On Thu, Oct 25, 2012 at 9:36 AM, Jorrit Boekel jorrit.boe...@scilifelab.se wrote:
> Dear list,
> In my galaxy fork, I extensively use the job splitters. Sometimes though, I have to split to different file types for the same job. That raises an exception in the lib/galaxy/jobs/splitters/multi.py module. I have turned this behaviour off for my own work, but am now wondering whether this is very bad practice. In other words, does somebody know why the multi splitter does not support multiple file type splitting?
> cheers, jorrit

Could you clarify what you mean by showing some of your tool's XML file, i.e. how the input and its splitting are defined. Are you asking about splitting two input files at the same time?

Peter
Re: [galaxy-dev] the multi job splitter
On 10/25/2012 11:25 AM, Peter Cock wrote:
>> Hi Peter,
>>
>> Something like the following:
>>
>> <command interpreter="python">bullseye.py $hardklor_results $ms2_in.extension $ms2_in $output $use_nonmatch</command>
>> <parallelism method="multi" split_inputs="hardklor_results,ms2_in" shared_inputs="config_file" split_mode="from_composite" merge_outputs="output"/>
>>
>> The tool takes two datasets of different formats, which are to be split into the same number of files, which belong together as pairs.
>
> So the inputs are $hardklor_results and $ms2_in (which should be split in a paired manner) and there is one output $output to merge? What is shared_inputs="config_file" for, as that isn't in the command tag anywhere?

Exactly. The tool uses results from a tool called hardklor to adjust the mass spectra contained in the ms2_input. And whoops, I haven't taken out the now obsolete config file; thanks for spotting that.

Note that I have implemented an odd way of splitting, which is from a number of files in the dataset.extra_files_path to symlinks in the task working dirs. The number of files is thus equal to the number of parts resulting from a split, and I have ensured that each part is paired correctly. I assume this hasn't been necessary in the genomics field, but for proteomics, at least in our lab, multiple-file datasets are the standard. My fork is at http://bitbucket.org/glormph/adapt if you want to check more closely.

> I don't quite follow your example, but I can see some (simpler?) cases for sequencing data: paired splitting of a FASTA + QUAL file, or paired splitting of two FASTQ files (forward and reverse reads). Here the sequence files can be broken up into any size (e.g. split in four, or divided into batches of 1, but not split based on size on disk), as long as the pairing is preserved. i.e. Given FASTA and QUAL for read1, read2, ..., read10, then if the FASTA file is split into read1, read2, ..., read1000 as the first chunk, then the first QUAL chunk must also have the same one thousand reads. (In these examples the pairing should be verifiable via the read names, so errors should be easy to catch; I don't know if you have that luxury in your situation.)

What you describe is pretty much the same as my situation, except that I don't have two large single input files like your fastq files, but two sets of the same number of files stored in the composite file directories (galaxy/database/files/000/dataset_x_files). I keep the files matched by keeping a _task_%d suffix to their names, so each task is matched with its correct counterpart with the same number. My question is still, though, whether it would be bad to not raise an exception when different file types are split in the same job.

cheers,
jorrit

--
Scientific programmer
Mass spec analysis support @ BILS
Janne Lehtiö / Lukas Käll labs
SciLifeLab Stockholm
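The `_task_%d` pairing convention described here could be checked with a small helper along these lines (a sketch; the function name and the error handling are my own, only the suffix convention comes from this thread):

```python
import os
import re
from collections import defaultdict

TASK_SUFFIX = re.compile(r"_task_(\d+)$")

def pair_composite_files(filenames):
    """Group composite-dataset file names by their _task_<n> suffix.

    Returns {task_number: [names]} so each task can symlink its own
    matched files into the task working directory.
    """
    pairs = defaultdict(list)
    for name in filenames:
        base, _ext = os.path.splitext(name)
        match = TASK_SUFFIX.search(base)
        if match is None:
            raise ValueError("no _task_<n> suffix on %s" % name)
        pairs[int(match.group(1))].append(name)
    return dict(pairs)
```

For example, `pair_composite_files(["hardklor_task_0.txt", "ms2_task_0.ms2", "hardklor_task_1.txt", "ms2_task_1.ms2"])` groups the hardklor and ms2 files for task 0 together and likewise for task 1.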
Re: [galaxy-dev] the multi job splitter
On Thu, Oct 25, 2012 at 10:35 AM, Jorrit Boekel jorrit.boe...@scilifelab.se wrote:
> What you describe is pretty much the same as my situation, except that I don't have two large single input files like your fastq files, but two sets of the same number of files stored in the composite file directories (galaxy/database/files/000/dataset_x_files). I keep the files matched by keeping a _task_%d suffix to their names, so each task is matched with its correct counterpart with the same number. My question is still, though, whether it would be bad to not raise an exception when different filetypes are split in the same job.

In general, splitting multiple files of different types seems dangerous. That is presumably the point of the Galaxy exception.

In my example of splitting a pair of FASTQ files, they are the same format, so Galaxy can make assumptions about how they will be split. Note that splitting into chunks based on the size on disk would be wrong (e.g. if the forward reads in the first file are all longer than the reverse reads in the second file).

In the case of splitting a paired FASTA + QUAL file, these are now different file formats, so more caution is required. In fact both can be split at the sequence/read level, so both can be processed. I think the key requirement here for 'matched' splitting is that each file must have the same number of 'records' (in my example, sequencing reads; in your case, sub-files), and can be split into chunks of the same number of 'records'.

Perhaps different file type combinations could be special cases in the splitter code? Then if there is no dedicated splitter for a given combination, that combination cannot be split.

Peter
Re: [galaxy-dev] the multi job splitter
On 10/25/2012 12:02 PM, Peter Cock wrote:
> Perhaps different file type combinations could be special cases in the splitter code? Then if there is no dedicated splitter for a given combination, that combination cannot be split.

I could imagine the multi splitter calling some sort of validating method of the different datatypes to gather information about the different datasets (e.g. split size, split numbers, matching file types) before executing a split. There may be more and better ways to get around it though. I'll settle for disabling the check for now; if mainline galaxy would be interested, we could look at it further, I guess.

cheers,
jorrit

--
Scientific programmer
Mass spec analysis support @ BILS
Janne Lehtiö / Lukas Käll labs
SciLifeLab Stockholm
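The validating-method idea floated in this exchange could look roughly like the following. This is an entirely hypothetical API: Galaxy's datatype classes do not define a `split_part_count()` hook; it is assumed here purely for illustration of the pre-split check:

```python
def validate_matched_split(datasets, requested_parts):
    """Check that a set of input datasets can be split into matched parts.

    Each dataset is assumed to expose split_part_count(), a hypothetical
    datatype method reporting how many records (reads, sub-files, ...)
    it can be divided into. All inputs must agree before a split runs.
    """
    counts = {ds.split_part_count() for ds in datasets}
    if len(counts) != 1:
        raise ValueError("inputs disagree on part count: %s" % sorted(counts))
    (count,) = counts
    if requested_parts > count:
        raise ValueError("asked for %d parts but only %d records available"
                         % (requested_parts, count))
    return count
```

With such a check in place, refusing mixed file types outright would no longer be necessary: any combination whose datatypes report matching counts could be split.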
[galaxy-dev] Error when creating Visualization
Hi,

on our local Galaxy instance I try to set up Visualization. I followed http://wiki.g2.bx.psu.edu/Learn/Visualization#Setup_for_Local_Instances

But when I try to create a new visualization I get an error with this traceback:

URL: http://bbc.mdc-berlin.de/galaxy/visualization/create
File '/data/galaxy/galaxy-dist/eggs/WebError-0.8a-py2.6.egg/weberror/evalexception/middleware.py', line 364 in respond
  app_iter = self.application(environ, detect_start_response)
File '/data/galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/debug/prints.py', line 98 in __call__
  environ, self.app)
File '/data/galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/wsgilib.py', line 539 in intercept_output
  app_iter = application(environ, replacement_start_response)
File '/data/galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/recursive.py', line 80 in __call__
  return self.application(environ, start_response)
File '/data/galaxy/galaxy-dist/eggs/Paste-1.6-py2.6.egg/paste/httpexceptions.py', line 632 in __call__
  return self.application(environ, start_response)
File '/data/galaxy/galaxy-dist/lib/galaxy/web/framework/base.py', line 160 in __call__
  body = method( trans, **kwargs )
File '/data/galaxy/galaxy-dist/lib/galaxy/web/framework/__init__.py', line 93 in decorator
  return func( self, trans, *args, **kwargs )
File '/data/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/controllers/visualization.py', line 590 in create
  type=visualization_type )
File '/data/galaxy/galaxy-dist/lib/galaxy/web/base/controller.py', line 343 in create_visualization
  revision = trans.model.VisualizationRevision( visualization=visualization, title=title, config=config, dbkey=dbkey )
File '<string>', line 4 in __init__
File '/data/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/orm/state.py', line 82 in initialize_instance
  return manager.events.original_init(*mixed[1:], **kwargs)
File '/data/galaxy/galaxy-dist/lib/galaxy/model/__init__.py', line 2764 in __init__
  self.visualization = visualization
File '/data/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/orm/attributes.py', line 150 in __set__
  self.impl.set(instance_state(instance), instance_dict(instance), value, None)
File '/data/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/orm/attributes.py', line 590 in set
  value = self.fire_replace_event(state, dict_, value, old, initiator)
File '/data/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/orm/attributes.py', line 610 in fire_replace_event
  value = ext.set(state, value, previous, initiator or self)
File '/data/galaxy/galaxy-dist/eggs/SQLAlchemy-0.5.6_dev_r6498-py2.6.egg/sqlalchemy/orm/attributes.py', line 847 in set
  new_state, new_dict = instance_state(child), instance_dict(child)
AttributeError: 'dict' object has no attribute '_sa_instance_state'

Any idea what may be wrong?

regards, Andreas

--
Andreas Kuntzagk
SystemAdministrator

Berlin Institute for Medical Systems Biology at the
Max-Delbrueck-Center for Molecular Medicine
Robert-Roessle-Str. 10, 13125 Berlin, Germany
http://www.mdc-berlin.de/en/bimsb/BIMSB_groups/Dieterich
Re: [galaxy-dev] Error when creating Visualization
My fault. I did not set a title for it. (Anyway, this should be caught, I think.)

regards, Andreas

On 25.10.2012 13:18, Andreas Kuntzagk wrote:
> Hi,
> on our local Galaxy instance I try to set up Visualization. I followed http://wiki.g2.bx.psu.edu/Learn/Visualization#Setup_for_Local_Instances
> But when I try to create a new visualization I get an error with this traceback:
> URL: http://bbc.mdc-berlin.de/galaxy/visualization/create
> [...]
> AttributeError: 'dict' object has no attribute '_sa_instance_state'
> Any idea what may be wrong?
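For what it's worth, a form-level guard would turn the missing title into a user-facing message instead of letting it surface as a SQLAlchemy AttributeError deep in the model layer. A minimal sketch with hypothetical names and messages, not Galaxy's actual controller code:

```python
def validate_visualization_form(title, visualization_type):
    """Return a dict of field errors; an empty dict means the form is valid.

    Checking required fields before create_visualization() is called keeps
    incomplete input out of the SQLAlchemy mapper entirely.
    """
    errors = {}
    if not title or not title.strip():
        errors["title"] = "A visualization name is required"
    if not visualization_type:
        errors["type"] = "A visualization type is required"
    return errors
```

The controller would re-render the form with these messages rather than proceeding to build the VisualizationRevision.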
Re: [galaxy-dev] [Patch] ToolShed support for included tools
Hi Bjorn,

I've committed a slightly altered version of your contributed code for handling tool dependencies that are zip archives in changeset 8107:b12140970208, which will be available in the next Galaxy release, currently scheduled for the end of next week. Your code required Python 2.6+, so I made some changes to support Python 2.5+. Please let me know if you have additional questions regarding this. Thanks very much for your contributions!

Greg Von Kuster

On Oct 15, 2012, at 1:15 PM, Björn Grüning wrote:
> Hi,
>
> I'm writing a galaxy wrapper for bismark and trim-galore. Both are plain perl scripts that wrap around other dependencies (e.g. Bowtie). The idea was to include the perl scripts directly in the galaxy wrapper and update the PATH to the REPOSITORY_INSTALL_DIR in the tool_dependency.xml file:
>
> <package name="bismark" version="0.7.7">
>     <install version="1.0">
>         <actions>
>             <action type="set_environment">
>                 <environment_variable name="PATH" action="prepend_to">$REPOSITORY_INSTALL_DIR</environment_variable>
>             </action>
>         </actions>
>     </install>
>     <readme>
>         bismark, bismark_genome_preparation and bismark_methylation_extractor are shipped with that wrapper
>     </readme>
> </package>
>
> Unfortunately, that was not supported because the toolshed expected at least one action_type. The attached patch should add that feature. Furthermore, bowtie2 is only available as a zip archive, and afaik that was not handled in the toolshed. The attached patch also adds check_zipfile(), extract_zip() and zip_extraction_directory() to fully support zip archives.
>
> Thanks!
> Bjoern
>
> toolshed_zip.patch
[galaxy-dev] Tools Always Failing?
Re-post to galaxy-...@bx.psu.edu, Todd - this will give your questions better visibility.

Is this your own tool? And do other tools packaged with the standard Galaxy distribution or installed from the Tool Shed run OK on the cluster? If it is your own tool (and others work, ruling out cluster set-up as an issue), you are probably already working from this wiki, but if not, section #42 covers exit codes: http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax

Let's see what other feedback/advice comes from the developers.

Best,
Jen
Galaxy team

On 10/25/12 1:34 PM, Yilk, Todd A wrote:
> Hello all,
>
> I'm using a local install of Galaxy connected to a UGE cluster and Python 2.6.7. Most of my tools are reporting failures in the history with messages like:
>
> An error occurred running this job: (34) Job output not returned from cluster; exit status = 0
>
> With the relevant lines from Galaxy's log file looking like:
>
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:31:26,586 job 34 working directory is /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:31:26,636 job 34 input = [[/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/files/000/dataset_48.dat, fastq, 5230830]]
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:31:26,671 (34) submitting file /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/uge/galaxy_34.sh
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:31:26,672 (34) command is: memtimepro -q -o /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/34.drmmt perl /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/tools/lanl/readMapping/fastqSplitter/separate_paired_end_reads.pl -i /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/files/000/dataset_48.dat -l /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/files/000/dataset_53.dat -r /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/files/000/dataset_54.dat ; cd /opt/galaxy/dev/Galaxy-JGI_galaxy-dev; /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/set_metadata.sh ./database/files /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34 . /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/universe_wsgi.ini /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/tmp/tmp831aRx /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/galaxy.json /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_in_HistoryDatasetAssociation_59_fpBxRp,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_kwds_HistoryDatasetAssociation_59_MLg1Aw,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_out_HistoryDatasetAssociation_59_ZVYBIT,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_results_HistoryDatasetAssociation_59_q4FpIq,,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_override_HistoryDatasetAssociation_59_3HXPLq /opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_in_HistoryDatasetAssociation_60_Qw0GpA,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_kwds_HistoryDatasetAssociation_60_ZF6Zk_,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_out_HistoryDatasetAssociation_60_eEItGr,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_results_HistoryDatasetAssociation_60_bMa6HN,,/opt/galaxy/dev/Galaxy-JGI_galaxy-dev/database/job_working_directory/000/34/metadata_override_HistoryDatasetAssociation_60_X1El1z
> galaxy.jobs.runners.drmaa INFO 2012-10-25 10:31:26,720 (34) queued as 553774
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:31:28,947 (34/553774) state change: job is running
> -- snip --
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:32:29,411 (34/553774) state change: job finished normally
> 128.165.72.57 - - [25/Oct/2012:10:32:31 -0600] POST /root/history_item_updates HTTP/1.0 200 - http://galaxy-dev.lanl.gov/history Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:15.0) Gecko/20100101 Firefox/15.0.1
> galaxy.jobs.runners.drmaa DEBUG 2012-10-25 10:32:31,241 (34) Job output not returned from cluster; exit status = 0
> galaxy.jobs DEBUG 2012-10-25 10:32:31,285 The tool did not define exit code or stdio handling; checking stderr for success
> galaxy.jobs DEBUG 2012-10-25 10:32:31,427 setting dataset state to ERROR
> galaxy.jobs DEBUG 2012-10-25 10:32:31,471 setting dataset state to ERROR
> galaxy.jobs DEBUG 2012-10-25 10:32:31,728 job 34 ended
> galaxy.datatypes.metadata DEBUG 2012-10-25 10:32:31,729 Cleaning up external metadata files
>
> And yet, for this example, the tool did actually run successfully with the expected output files where they should be in the galaxy/database/files directory. Further, the 34.drmec file has just the exit code 0, the 34.drmout file is empty, and the 34.drmmt file is: { program: perl, arguments: [
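The log line "The tool did not define exit code or stdio handling; checking stderr for success" refers to Galaxy's legacy fallback for tools without <stdio> rules in their XML. A simplified sketch of that heuristic (my paraphrase of the behaviour described in the log, not Galaxy's actual code):

```python
def legacy_job_succeeded(has_stdio_rules, exit_code, stderr_text):
    """Simplified sketch of Galaxy's legacy success check.

    With <stdio>/exit-code rules defined in the tool XML, the exit code
    decides. Without them, Galaxy ignores the exit code and fails the
    job if anything at all was written to stderr.
    """
    if has_stdio_rules:
        return exit_code == 0
    return stderr_text.strip() == ""
```

This is why a tool that exits 0 but prints warnings to stderr can still be marked as failed, and conversely why defining <stdio> handling in the tool wrapper is the usual fix.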
[galaxy-dev] Apache static contents compression caused corrupted history export tar.gz file
Hi guys,

Just found that history export files created on our Galaxy are always corrupted. We spotted that enabling Apache compression (per the Galaxy wiki) is the cause:

SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:t?gz|zip|bz2)$ no-gzip dont-vary

The URL for downloading the export file does not match either rule, so I added a third one and the issue is fixed:

SetEnvIfNoCase Request_URI export_archive no-gzip dont-vary

Maybe anyone in the team can have a look and decide if this should be added to the wiki: http://wiki.g2.bx.psu.edu/Admin/Config/Apache%20Proxy

Cheers,
D
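A quick way to see why the stock no-gzip rules miss the export download: the patterns only key on file extensions at the end of the request URI, while the export URL carries no extension. The example URI below is illustrative; only the two patterns and the export_archive substring come from the email and wiki config:

```python
import re

# the two stock no-gzip patterns from the wiki's Apache config
patterns = [r"\.(?:gif|jpe?g|png)$", r"\.(?:t?gz|zip|bz2)$"]

# a hypothetical history-export download path: no file extension,
# even though the payload served is a tar.gz
export_uri = "/galaxy/history/export_archive"

# neither extension pattern matches, so mod_deflate gzips the
# already-gzipped archive a second time, corrupting the download
assert not any(re.search(p, export_uri, re.IGNORECASE) for p in patterns)

# the rule added in the email matches on the action name instead
assert re.search("export_archive", export_uri, re.IGNORECASE)
```

Matching on the `export_archive` substring rather than an extension is what makes the third SetEnvIfNoCase rule catch these requests.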