Re: [galaxy-dev] read_bits64 error when loading large genomes/data into trackster

2014-04-16 Thread Raj Ayyampalayam

Thanks Jeremy.

-Raj

On Wednesday, April 16, 2014 9:32:23 AM, Jeremy Goecks wrote:

Hi Raj,

Thanks for sharing your datasets. This issue should be fixed as of
this commit in -central:

https://bitbucket.org/galaxy/galaxy-central/commits/3a154b52090316dee61f42993eaa5a1fa40116d3

Best,
J.

--
Jeremy Goecks
Assistant Professor of Computational Biology
George Washington University



On Mar 25, 2014, at 11:15 AM, Raj Ayyampalayam <ra...@uga.edu> wrote:


Jeremy,

I uploaded my dataset to the main galaxy site and the visualization
failed.
Here is the link to the history:
https://usegalaxy.org/u/raj-a-n/h/amborella-viz-test
I used the same genome and a Cufflinks GTF file instead of the BAM file
(the BAM is too big to upload).

I am seeing the same problem on both Main and my local Galaxy
instance (the local one tracks the latest galaxy-dist).

Thanks,
-Raj


On Wednesday, March 05, 2014 11:25:26 AM, Raj Ayyampalayam wrote:

I checked the version of the bx-python egg and confirmed that it is
bx_python-0.7.1_7b95ff194725-py2.7-linux-x86_64-ucs2.egg.

I will upload the dataset to the public server and try it over there
and report.

Thanks,
-Raj

On Wednesday, March 05, 2014 11:21:56 AM, Jeremy Goecks wrote:

Would it be possible that you have an old copy of the bx-python egg?
You should have bx-python 0.7.1

If you check your eggs directory and you see version 0.7.1, then there
may be something wrong with bx-python. In this case, please upload
your build and bam to our public server and try again; if it fails on
our public server, share the datasets with me and I'll take a look.

Thanks,
J.

--
Jeremy Goecks
Assistant Professor, Computational Biology Institute
George Washington University



On Mar 4, 2014, at 5:14 PM, Raj Ayyampalayam <ra...@uga.edu> wrote:


Hello,

I am trying to visualize a large genome (with a large number of scaffolds)
and a large BAM file in Trackster on our local Galaxy instance
(running release_2014.02.10).
Whenever I try to do this, I see the following error in the logs:

  File "bbi_file.pyx", line 215, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5596)
  File "bbi_file.pyx", line 222, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5210)
  File "bbi_file.pyx", line 183, in bx.bbi.bbi_file.BBIFile.summarize (lib/bx/bbi/bbi_file.c:4475)
  File "bbi_file.pyx", line 248, in bx.bbi.bbi_file.BBIFile._get_chrom_id_and_size (lib/bx/bbi/bbi_file.c:5656)
  File "bpt_file.pyx", line 76, in bx.bbi.bpt_file.BPTFile.find (lib/bx/bbi/bpt_file.c:1388)
  File "bpt_file.pyx", line 55, in bx.bbi.bpt_file.BPTFile.r_find (lib/bx/bbi/bpt_file.c:1154)
AttributeError: 'BinaryFileReader' object has no attribute 'read_bits64'

Trackster works OK when I load smaller data sets.

It seems that there was a fix for this in bx-python code, as per mail
from Jeremy
(http://dev.list.galaxyproject.org/trackster-error-for-viewing-rat-data-rn5-tp4662664p4662672.html).

How do I get the fixed code into my galaxy instance?

Thanks,
-Raj


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
http://galaxyproject.org/search/mailinglists/








Re: [galaxy-dev] read_bits64 error when loading large genomes/data into trackster

2014-03-25 Thread Raj Ayyampalayam

Jeremy,

I uploaded my dataset to the main galaxy site and the visualization 
failed.
Here is the link to the history: 
https://usegalaxy.org/u/raj-a-n/h/amborella-viz-test
I used the same genome and a Cufflinks GTF file instead of the BAM file
(the BAM is too big to upload).


I am seeing the same problem on both Main and my local Galaxy
instance (the local one tracks the latest galaxy-dist).


Thanks,
-Raj


On Wednesday, March 05, 2014 11:25:26 AM, Raj Ayyampalayam wrote:

I checked the version of the bx-python egg and confirmed that it is
bx_python-0.7.1_7b95ff194725-py2.7-linux-x86_64-ucs2.egg.

I will upload the dataset to the public server and try it over there
and report.

Thanks,
-Raj

On Wednesday, March 05, 2014 11:21:56 AM, Jeremy Goecks wrote:

Would it be possible that you have an old copy of the bx-python egg?
You should have bx-python 0.7.1

If you check your eggs directory and you see version 0.7.1, then there
may be something wrong with bx-python. In this case, please upload
your build and bam to our public server and try again; if it fails on
our public server, share the datasets with me and I'll take a look.

Thanks,
J.

--
Jeremy Goecks
Assistant Professor, Computational Biology Institute
George Washington University



On Mar 4, 2014, at 5:14 PM, Raj Ayyampalayam <ra...@uga.edu> wrote:


Hello,

I am trying to visualize a large genome (with a large number of scaffolds)
and a large BAM file in Trackster on our local Galaxy instance
(running release_2014.02.10).
Whenever I try to do this, I see the following error in the logs:

  File "bbi_file.pyx", line 215, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5596)
  File "bbi_file.pyx", line 222, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5210)
  File "bbi_file.pyx", line 183, in bx.bbi.bbi_file.BBIFile.summarize (lib/bx/bbi/bbi_file.c:4475)
  File "bbi_file.pyx", line 248, in bx.bbi.bbi_file.BBIFile._get_chrom_id_and_size (lib/bx/bbi/bbi_file.c:5656)
  File "bpt_file.pyx", line 76, in bx.bbi.bpt_file.BPTFile.find (lib/bx/bbi/bpt_file.c:1388)
  File "bpt_file.pyx", line 55, in bx.bbi.bpt_file.BPTFile.r_find (lib/bx/bbi/bpt_file.c:1154)
AttributeError: 'BinaryFileReader' object has no attribute 'read_bits64'

Trackster works OK when I load smaller data sets.

It seems that there was a fix for this in bx-python code, as per mail
from Jeremy
(http://dev.list.galaxyproject.org/trackster-error-for-viewing-rat-data-rn5-tp4662664p4662672.html).

How do I get the fixed code into my galaxy instance?

Thanks,
-Raj









Re: [galaxy-dev] new data manager can't build bowtie or sam fasta, interface differences, how to fetch ncbi stuff

2014-03-06 Thread Raj Ayyampalayam
You are probably missing the samtools and bowtie2 dependencies. Make 
sure Galaxy can find the binaries.


-Raj

On Fri Mar  7 00:16:07 2014, Langhorst, Brad wrote:

I’m slowly wrapping my head around the new data manager stuff… but I
don’t think I quite have it yet.

I was able to download hg19 using the reference genome tool

I see where it puts downloaded files now - took me a while to figure
that out.

When I run the sam fa build tool it fails like this:
Traceback (most recent call last):
  File "/mnt/ngswork/galaxy/galaxy-toolshed-tools.0214/testtoolshed.g2.bx.psu.edu/repos/blankenberg/data_manager_sam_fa_index_builder/926e50397b83/data_manager_sam_fa_index_builder/data_manager/data_manager_sam_fa_index_builder.py", line 71, in <module>
    if __name__ == "__main__": main()
  File "/mnt/ngswork/galaxy/galaxy-toolshed-tools.0214/testtoolshed.g2.bx.psu.edu/repos/blankenberg/data_manager_sam_fa_index_builder/926e50397b83/data_manager_sam_fa_index_builder/data_manager/data_manager_sam_fa_index_builder.py", line 66, in main
    build_sam_index( data_manager_dict, options.fasta_filename, target_directory, options.fasta_dbkey, data_table_name=options.data_table_name or DEFAULT_DATA_TABLE_NAME )
  File "/mnt/ngswork/galaxy/galaxy-toolshed-tools.0214/testtoolshed.g2.bx.psu.edu/repos/blankenberg/data_manager_sam_fa_index_builder/926e50397b83/data_manager_sam_fa_index_builder/data_manager/data_manager_sam_fa_index_builder.py", line 25, in build_sam_index
    proc = subprocess.Popen( args=args, shell=False, cwd=target_directory, stderr=tmp_stderr.fileno() )
  File "/usr/lib64/python2.6/subprocess.py", line 639, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.6/subprocess.py", line 1228, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

What am I missing here?

I can’t tell what file it’s referring to.

The bowtie2 index builder also fails.
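An OSError: [Errno 2] raised inside subprocess.Popen with shell=False almost always means the executable in args[0] could not be found on the PATH of the process that launched it. A hedged sketch for checking this from Galaxy's environment follows; the tool names samtools and bowtie2-build are assumptions based on the builders that failed, so adjust them for your setup:

```python
import os

def find_executable(name):
    """Return the full path to `name` if it is on PATH, else None."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# Check the binaries the data managers shell out to.
for tool in ("samtools", "bowtie2-build"):
    path = find_executable(tool)
    print("%-14s %s" % (tool, path or "NOT FOUND -- install it or extend PATH"))
```

Run this as the same user (and with the same environment) that the Galaxy job handler uses; a binary visible in your login shell may still be missing from the job's PATH.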



The old interface that allowed download, then building of chosen
indices was more convenient.
It’s not fun to download, wait, then click each builder to schedule a
build.

Is a replacement for the more unified interface planned?
Am I supposed to just build a workflow that builds all the stuff I want?



I can’t tell what I’m supposed to feed as “dbkey” to the NCBI downloader.
It’s also not easy for me to find the list of dbkeys available at
UCSC (eschColi_K12 does not seem to work; I don’t know why, but I
guess it doesn’t know to try the microbes section).

It would be very nice to allow users to select which genome they want
by choosing from a list.

I’m headed back to manual .loc-file territory for now, but I’d love
some enlightenment about how this is supposed to work and/or
information about how others are using it.


Brad



--
Brad Langhorst, Ph.D.
Applications and Product Development Scientist





--
Bio-informatics consultant
QBCG (http://qbcg.uga.edu)
706-542-6092 (8-12 All week)
706-583-0442 (12-5 All week)




[galaxy-dev] read_bits64 error when loading large genomes/data into trackster

2014-03-04 Thread Raj Ayyampalayam

Hello,

I am trying to visualize a large genome (with a large number of scaffolds)
and a large BAM file in Trackster on our local Galaxy instance
(running release_2014.02.10).

Whenever I try to do this, I see the following error in the logs:

  File "bbi_file.pyx", line 215, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5596)
  File "bbi_file.pyx", line 222, in bx.bbi.bbi_file.BBIFile.query (lib/bx/bbi/bbi_file.c:5210)
  File "bbi_file.pyx", line 183, in bx.bbi.bbi_file.BBIFile.summarize (lib/bx/bbi/bbi_file.c:4475)
  File "bbi_file.pyx", line 248, in bx.bbi.bbi_file.BBIFile._get_chrom_id_and_size (lib/bx/bbi/bbi_file.c:5656)
  File "bpt_file.pyx", line 76, in bx.bbi.bpt_file.BPTFile.find (lib/bx/bbi/bpt_file.c:1388)
  File "bpt_file.pyx", line 55, in bx.bbi.bpt_file.BPTFile.r_find (lib/bx/bbi/bpt_file.c:1154)

AttributeError: 'BinaryFileReader' object has no attribute 'read_bits64'

Trackster works OK when I load smaller data sets.

It seems that there was a fix for this in bx-python code, as per mail 
from Jeremy 
(http://dev.list.galaxyproject.org/trackster-error-for-viewing-rat-data-rn5-tp4662664p4662672.html).

How do I get the fixed code into my galaxy instance?
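One way to confirm which bx-python copy a given interpreter actually imports, and whether its reader has the method the traceback reports missing, is a quick check like the sketch below. Run it with Galaxy's own Python and eggs on sys.path; the module path bx.misc.binary_file is an assumption about where BinaryFileReader lives in bx-python 0.7.x:

```python
def bx_read_bits64_status():
    """Return True/False for whether BinaryFileReader has read_bits64,
    or None if bx-python is not importable with this interpreter."""
    try:
        import bx  # shows which installed copy is on the path
        from bx.misc.binary_file import BinaryFileReader
    except ImportError:
        return None
    print("bx-python loaded from:", bx.__file__)
    return hasattr(BinaryFileReader, "read_bits64")

status = bx_read_bits64_status()
if status is None:
    print("bx-python is not importable with this interpreter")
else:
    print("BinaryFileReader has read_bits64:", status)
```

If this prints False while the eggs directory shows 0.7.1, the interpreter is likely picking up a stale copy earlier on sys.path.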

Thanks,
-Raj



Re: [galaxy-dev] Sudden error in local galaxy installation

2013-08-02 Thread Raj Ayyampalayam

After some investigating, I think the problem is at this line:

    converted_dataset = converter.execute( trans, incoming=params, set_output_hid=visible, set_output_history=set_output_history)[1]

in File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/datatypes/data.py", line 483, in convert_dataset


-Raj

On Fri Aug  2 00:51:39 2013, Raj Ayyampalayam wrote:

Here is the first part of the error (Traceback):

galaxy.tools ERROR 2013-08-01 22:56:16,590 Exception caught while
attempting tool execution:
Traceback (most recent call last):
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 1936, in handle_input
    _, out_data = self.execute( trans, incoming=params, history=history )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 2264, in execute
    return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/actions/__init__.py", line 213, in execute
    chrom_info = build_fasta_dataset.get_converted_dataset( trans, 'len' ).file_name
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/model/__init__.py", line 1308, in get_converted_dataset
    new_dataset = self.datatype.convert_dataset( trans, self, target_ext, return_output=True, visible=False, deps=deps, set_output_history=False ).values()[0]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/datatypes/data.py", line 483, in convert_dataset
    converted_dataset = converter.execute( trans, incoming=params, set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 2264, in execute
    return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
  [the execute -> get_converted_dataset -> convert_dataset -> execute frames repeat from here on]

[galaxy-dev] Sudden error in local galaxy installation

2013-08-01 Thread Raj Ayyampalayam

Hello,

My local cluster-based Galaxy installation was running fine until this
evening, when I tested some workflows and got an error with a massive
stack trace.
I can't make any sense of it. Here is the last part of the error; the
stack trace is literally thousands of lines long. Can somebody shed some
light on this?


Thanks,
-Raj

  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/datatypes/data.py", line 483, in convert_dataset
    converted_dataset = converter.execute( trans, incoming=params, set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 2264, in execute
    return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/actions/__init__.py", line 201, in execute
    db_dataset = trans.db_dataset_for( input_dbkey )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/web/framework/__init__.py", line 1046, in db_dataset_for
    for ds in datasets:
  File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/query.py", line 2227, in __iter__
    return self._execute_and_instances(context)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/query.py", line 2242, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1449, in execute
    params)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1576, in _execute_clauseelement
    inline=len(distilled_params) > 1)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 1778, in compile
    return self._compiler(dialect, bind=bind, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 1784, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 276, in __init__
    engine.Compiled.__init__(self, dialect, statement, **kwargs)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 705, in __init__
    self.string = self.process(self.statement)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 724, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
    return getter(visitor)(self, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 997, in visit_select
    t = select._whereclause._compiler_dispatch(self, **kwargs)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
    return getter(visitor)(self, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 498, in visit_clauselist
    for c in clauselist.clauses)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 496, in <genexpr>
    s for s in
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 498, in <genexpr>
    for c in clauselist.clauses)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
    return getter(visitor)(self, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 618, in visit_binary
    **kw
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 663, in _operator_dispatch
    return fn(OPERATORS[operator])
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 617, in <lambda>
    self, **kw),
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
    return getter(visitor)(self, **kw)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 679, in visit_bindparam
    name = self._truncate_bindparam(bindparam)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 743, in _truncate_bindparam
    bind_name = self._truncated_identifier(bindparam, bind_name)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/compiler.py", line 754, in _truncated_identifier
    anonname = name.apply_map(self.anon_map)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/expression.py", line 1325, in apply_map
    return self % map_
RuntimeError: maximum recursion depth exceeded

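The earlier repeated frames in this trace describe a conversion loop: convert_dataset() runs a converter tool, and running that tool asks for the dataset's 'len' conversion, which calls convert_dataset() again until Python's recursion limit is hit. A minimal sketch of that failure pattern (the function names are illustrative, not Galaxy's real signatures):

```python
def convert_dataset(dataset):
    # Converting the dataset requires executing a converter tool...
    return execute_converter(dataset)

def execute_converter(dataset):
    # ...but executing the tool re-derives chrom_info, which re-enters
    # the conversion machinery, so the cycle never terminates.
    chrom_info = convert_dataset(dataset)
    return chrom_info

try:
    convert_dataset("my_build.fasta")
except RuntimeError as exc:
    # Python reports the unbounded cycle as a RuntimeError, matching the
    # "maximum recursion depth exceeded" message in the Galaxy log.
    print(type(exc).__name__, exc)
```

Breaking such a loop means ensuring the conversion step has a base case, e.g. the 'len' file already exists or the converter does not itself request the conversion.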





Re: [galaxy-dev] Sudden error in local galaxy installation

2013-08-01 Thread Raj Ayyampalayam
, in get_converted_dataset
    new_dataset = self.datatype.convert_dataset( trans, self, target_ext, return_output=True, visible=False, deps=deps, set_output_history=False ).values()[0]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/datatypes/data.py", line 483, in convert_dataset
    converted_dataset = converter.execute( trans, incoming=params, set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 2264, in execute
    return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/actions/__init__.py", line 213, in execute
    chrom_info = build_fasta_dataset.get_converted_dataset( trans, 'len' ).file_name
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/model/__init__.py", line 1308, in get_converted_dataset
    new_dataset = self.datatype.convert_dataset( trans, self, target_ext, return_output=True, visible=False, deps=deps, set_output_history=False ).values()[0]
  [the preceding frames repeat for the rest of the message]


It seems to be dying at: chrom_info =
build_fasta_dataset.get_converted_dataset( trans, 'len' ).file_name


Thanks,
-Raj
On 8/2/13 12:00 AM, Raj Ayyampalayam wrote:

Hello,

My local cluster-based Galaxy installation was running fine until this
evening, when I tested some workflows and got an error with a massive
stack trace.
I can't make any sense of it. Here is the last part of the error; the
stack trace is literally thousands of lines long. Can somebody shed
some light on this?


Thanks,
-Raj

  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/datatypes/data.py", line 483, in convert_dataset
    converted_dataset = converter.execute( trans, incoming=params, set_output_hid=visible, set_output_history=set_output_history)[1]
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/lib/galaxy/tools/__init__.py", line 2264, in execute
    return self.tool_action.execute( self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs )
  File "/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy

[galaxy-dev] Storage question for Galaxy-Admins

2013-05-29 Thread Raj Ayyampalayam

Hello,

I am trying to come up with a number for our local Galaxy storage
requirements before we go live (this is an installation for research at
our university).


I checked out the survey of GalaxyAdmins that was done a while back. It
seems that the bigger installations have allocated about 200 TB of
storage.


It would be of great help to me if I can get some feedback on storage 
used by local galaxy installations.

If possible please use the following format:

Total capacity:
Used capacity:
Storage Technology:
Comments:

Thank you all very much.
-Raj



Re: [galaxy-dev] Trinity wrapper not working on local installation

2013-03-28 Thread Raj Ayyampalayam

Hi,

None of the solutions that were mentioned fixed the issue. I will do 
some more debugging later this evening.


Thanks,
Raj

On Thursday, March 28, 2013 12:26:56 PM, James Taylor wrote:

I haven't looked at this directly, but it should be using None, in
which case the correct comparisons are "library_type is None" and
"library_type is not None".

--
James Taylor, Assistant Professor, Biology/CS, Emory University


On Thu, Mar 28, 2013 at 6:35 AM, Peter Cock <p.j.a.c...@googlemail.com> wrote:

On Thu, Mar 28, 2013 at 4:36 AM, Raj Ayyampalayam <ra...@uga.edu> wrote:

Hello,

I am trying to get the trinity wrapper to work on our local galaxy
installation using the latest trinity version.

The main issue is that the Trinity.pl call has --SS_lib_type None in the
script file. The data I am using is unstranded paired end reads and I am
selecting None in the tool parameters.
I looked at the trinity_all.xml file and it seems like the following code is
not working:

 #if $inputs.library_type != 'None':
 --SS_lib_type $inputs.library_type
 #end if

I am new to Cheetah and Python, and I am not sure why this code is not
working. Any suggestions on how to go about debugging it?


This is almost certainly a type comparison error, obscured by
the cheetah template language and the parameter proxy classes.
In Python there is a special object None, which is probably what
the library type is using. I would try making this an explicit
comparison of strings (a pattern used in many other wrappers,
e.g. tools/gatk/*.xml):

  #if str($inputs.library_type) != 'None':
  --SS_lib_type $inputs.library_type
  #end if

Or, this might work too:

  #if $inputs.library_type != None:
  --SS_lib_type $inputs.library_type
  #end if

(This does seem to be a bug in the trinity wrapper)
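The comparison bug Peter describes can be reproduced in plain Python: Cheetah hands the template a wrapper object whose text form is "None", and an arbitrary object never compares equal to a string unless it is coerced first. The ParamProxy class below is a hypothetical stand-in for Galaxy's parameter wrapper, for illustration only:

```python
class ParamProxy(object):
    """Hypothetical stand-in for what Cheetah sees as $inputs.library_type."""
    def __init__(self, value):
        self.value = value

    def __str__(self):
        # Renders as the underlying value's text form, e.g. "None".
        return str(self.value)

library_type = ParamProxy(None)

# The template's original test compares the wrapper object to a string,
# which can never be equal, so the #if branch always fires and
# "--SS_lib_type None" is always emitted:
print(library_type != 'None')        # True

# Coercing to str first, as suggested above, behaves as intended:
print(str(library_type) != 'None')   # False
```

This is why the str($inputs.library_type) pattern from other wrappers is the safer comparison.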

Peter




--
Bio-informatics consultant
QBCG (http://qbcg.uga.edu)
706-542-6092
706-583-0442



[galaxy-dev] Trinity wrapper not working on local installation

2013-03-27 Thread Raj Ayyampalayam

Hello,

I am trying to get the trinity wrapper to work on our local galaxy 
installation using the latest trinity version.


The main issue is that the Trinity.pl call has --SS_lib_type None in the 
script file. The data I am using is unstranded paired-end reads, and I am 
selecting None in the tool parameters.
I looked at the trinity_all.xml file and it seems like the following 
code is not working:


#if $inputs.library_type != 'None':
--SS_lib_type $inputs.library_type
#end if

I am new to Cheetah and Python and I am not sure why this code is not 
working. Any suggestions on how to go about debugging it?


Thanks,
-Raj



[galaxy-dev] Uploading large data sets from cluster file system for local galaxy install

2013-03-22 Thread Raj Ayyampalayam

Hello,

I have a question for the many galaxy admins here. I am in the process 
of setting up a local galaxy install submitting the jobs as real users 
to our local SGE cluster. I am trying to figure out the best way to load 
large datasets (sequence files, etc.) stored on the local cluster's 
storage. These files might not be world-readable, and we cannot assume 
that the user knows how to make them so.


I was wondering if anybody here has tackled the problem and found a 
workable solution?


Thanks,
-Raj




Re: [galaxy-dev] Problem running ncbi_makeblastdb as real user in cluster

2013-03-20 Thread Raj Ayyampalayam

Hi Guys,

Thanks for the quick response. I installed the latest blast datatypes 
from the toolshed and that fixed the issue.


Peter, can you also change $outfile.extra_files_path to 
$outfile.files_path on line 5 of the makeblastdb wrapper XML file?


Thanks,
-Raj



On Wednesday, March 20, 2013 10:42:43 AM, Peter Cock wrote:

On Wed, Mar 20, 2013 at 1:55 PM, Peter Cock p.j.a.c...@googlemail.com wrote:

On Wed, Mar 20, 2013 at 1:52 PM, Nicola Soranzo sora...@crs4.it wrote:

Il giorno mer, 20/03/2013 alle 12.41 +, Peter Cock ha scritto:


The patch just removes the MetadataElement call - is that wise?


Hi Peter,
my question instead is: what are they for? Without a 'name' they have no
meaning.


Looking at the code, I don't see a recent API change regarding
the name:

https://bitbucket.org/galaxy/galaxy-central/history-node/default/lib/galaxy/datatypes/metadata.py?at=default

Is there perhaps a universe_wsgi.ini setting which might be involved,
since I've not seen this error locally?


$ cd lib/galaxy/datatypes/
$ grep 'MetadataElement(' *.py|grep -v name

returns nothing, so the 'name' parameter is really mandatory.
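For what it's worth, the "hasattr(): attribute name must be string" error reported in this thread is consistent with a missing 'name': a MetadataElement defined without one would leave the name as None, and a later hasattr() check on it then fails with exactly that TypeError. A minimal sketch (the Dataset class is a hypothetical stand-in):

```python
class Dataset:
    """Hypothetical stand-in for the object whose metadata is checked."""
    pass

meta_name = None  # what a MetadataElement defined without 'name' carries

try:
    # hasattr() requires a string attribute name; None triggers the
    # same TypeError seen in the history API logs.
    hasattr(Dataset(), meta_name)
except TypeError as err:
    print(type(err).__name__)  # TypeError
```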

Best,
Nicola



Good question - I'd have to ask Edward what he thought this did,
but you seem to be right that as the code stands this metadata
element is rather pointless.

(I'm still puzzled why we don't see the error here though).

I'll apply your patch and test it locally...

Peter


Nicola - It looks good here, uploaded to the ToolShed as v0.0.15
http://toolshed.g2.bx.psu.edu/view/devteam/blast_datatypes

Raj - Could you apply the update from the ToolShed and
confirm if that fixes the problem for you?

Edward - have you received my direct email off list?

Thanks,

Peter



--
Bio-informatics consultant
QBCG (http://qbcg.uga.edu)
706-542-6092
706-583-0442



Re: [galaxy-dev] Problems with rsync from datacache.g2.bx.psu.edu/indexes

2013-03-20 Thread Raj Ayyampalayam

Hi Nate,

I am trying to rsync the indexes and getting a rsync error.

$ rsync -avz rsync://datacache.g2.bx.psu.edu/indexes/phiX .
@ERROR: chroot failed
rsync error: error starting client-server protocol (code 5) at 
main.c(1530) [receiver=3.0.6]


Should I change anything in my command?

Thanks,
-Raj


On 2/15/2013 11:22 AM, Nate Coraor wrote:

On Feb 15, 2013, at 11:12 AM, Rodolfo Aramayo wrote:


Hi,

I hope this is the right place to ask,

I am trying to rsync from:

rsync -avzP rsync://datacache.g2.bx.psu.edu/indexes/

and I am getting the error:

@ERROR: chroot failed
rsync error: error starting client-server protocol (code 5) at main.c(1534) 
[receiver=3.0.9]

I know I can rsync to any other places

Do you happen to know what is going on?

Hi Rodolfo,

The array on which the data cache is located is currently down due to an 
upgrade problem.  We hope to have this fixed today.  Sorry for the 
inconvenience.

--nate


Thanks

--Rodolfo






--
Bio-informatics consultant
QBCG (http://qbcg.uga.edu)
706-542-6092
706-583-0442




[galaxy-dev] Problem running ncbi_makeblastdb as real user in cluster

2013-03-19 Thread Raj Ayyampalayam

Hello,

I am setting up a local galaxy installation and using our local SGE 
cluster to run the jobs as real users.


One of the first few tools I tested was the NCBI makeblastdb tool. When 
I try to run it, the history item for that particular job is immediately 
shown as follows:


0: (unnamed dataset)
Failed to retrieve dataset information.
An error occurred with this dataset: hasattr(): attribute name must be 
string


In the stdout of the job I was getting the following error:

Fatal error: Matched on Error:
Error: NCBI C++ Exception:

/usr/local/src/ncbiblast+/2.2.27/c++/src/objtools/blast/seqdb_writer/build_db.cpp,
 line 979: Error: ncbi::s_CreateDirectories() - Failed to create directory 
'dataset_38_files'


I finally figured out that the makeblastdb wrapper was using 
$outfile.extra_files_path instead of $outfile.files_path. I fixed the 
wrapper and restarted the instance. The job now runs OK and reports 
successful completion in the stdout.


But, the history for this particular job is still shown as explained above.

Can somebody help me figure out what is going wrong in my setup?

Here are some of the relevant logs:

== web1.log ==
128.192.203.31 - - [19/Mar/2013:15:14:35 -0400] POST /tool_runner/index 
HTTP/1.1 200 - 
http://galaxy.qbcg.uga.edu/tool_runner?tool_id=toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_makeblastdb/0.0.1; 
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like 
Gecko) Chrome/25.0.1364.172 Safari/537.22


== web0.log ==
128.192.203.31 - - [19/Mar/2013:15:14:36 -0400] GET /history HTTP/1.1 
200 - http://galaxy.qbcg.uga.edu/tool_runner/index; Mozilla/5.0 
(Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) 
Chrome/25.0.1364.172 Safari/537.22
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,161 
Error in history API at listing contents with history d413a19dec13d11e, 
hda e89067bb68bee7a0: hasattr(): attribute name must be string
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,316 
Error in history API at listing contents with history d413a19dec13d11e, 
hda ba03619785539f8c: hasattr(): attribute name must be string
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,316 
Error in history API at listing contents with history d413a19dec13d11e, 
hda cbbbf59e8f08c98c: hasattr(): attribute name must be string
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,317 
Error in history API at listing contents with history d413a19dec13d11e, 
hda 964b37715ec9bd22: hasattr(): attribute name must be string
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,317 
Error in history API at listing contents with history d413a19dec13d11e, 
hda 1fad1eaf5f4f1766: hasattr(): attribute name must be string
galaxy.webapps.galaxy.api.history_contents ERROR 2013-03-19 15:14:36,318 
Error in history API at listing contents with history d413a19dec13d11e, 
hda 2fdbd5c5858e78fb: hasattr(): attribute name must be string


== manager.log ==
galaxy.jobs.manager DEBUG 2013-03-19 15:14:40,856 (42) Job assigned to 
handler 'handler0'


== handler0.log ==
galaxy.jobs DEBUG 2013-03-19 15:14:41,790 (42) Working directory for job 
is: 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/job_working_directory/000/42
galaxy.jobs.rules.200_rules DEBUG 2013-03-19 15:14:41,791 
toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_makeblastdb/0.0.1
galaxy.jobs.handler DEBUG 2013-03-19 15:14:41,792 dispatching job 42 to 
drmaa runner

galaxy.jobs.handler INFO 2013-03-19 15:14:41,882 (42) Job dispatched
galaxy.tools DEBUG 2013-03-19 15:14:42,068 Building dependency shell 
command for dependency 'makeblastdb'
galaxy.tools WARNING 2013-03-19 15:14:42,068 Failed to resolve 
dependency on 'makeblastdb', ignoring
galaxy.jobs.runners.drmaa DEBUG 2013-03-19 15:14:42,453 (42) submitting 
file 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/pbs/galaxy_42.sh
galaxy.jobs.runners.drmaa DEBUG 2013-03-19 15:14:42,453 (42) command is: 
makeblastdb -version  
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/tmp/GALAXY_VERSION_STRING_42; 
makeblastdb -out 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/job_working_directory/000/42/dataset_41_files/blastdb 
-hash_index -in  
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/files/000/dataset_30.dat 
 -title wert -dbtype nucl; cd 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist; 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/set_metadata.sh 
./database/files 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/job_working_directory/000/42 
. 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/universe_wsgi.ini 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/tmp/tmpxi116L 
/panfs/pstor.storage/home/qbcglab/galaxy_run/galaxy-dist/database/job_working_directory/000/42/galaxy.json 

Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Raj Ayyampalayam

Hello,

I am interested in this local upload file tool. Can you please send 
the files to me as well.


Thanks.
-Raj

On 6/8/12 4:36 AM, Alban Lermine wrote:

Hi,

There is also another solution if you don't want to let users create 
libraries.
We have implemented this solution in our local production server here 
at Institut Curie.
We added a tool called local upload file that takes as input parameters 
the name of the dataset, the type of file and the path to the file you 
want to upload.
At execution time, the bash script behind it removes the output 
dataset created by Galaxy and replaces it with a symbolic link to the 
file (so you don't duplicate files).
Just to warn you, there could be a security hole with this tool: 
if it is executed locally, it runs as the Galaxy 
application user, which can potentially have more rights on files than 
the current user.
To close this hole, we execute the tool on our local cluster 
(with PBS/Torque) as the current user (so if the current user tries to 
upload a file that he doesn't own, the tool will not be able to 
create the symbolic link and the user will receive an error in the 
Galaxy interface).
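For illustration only, the core of such a tool's script might look like this (a hedged Python sketch, not Alban's actual bash implementation; the function and argument names are invented):

```python
import os

def link_local_file(source_path, galaxy_dataset_path):
    """Replace the placeholder dataset Galaxy created with a symlink
    to the user's file, so no data is duplicated."""
    # Remove the output dataset Galaxy created for this job.
    if os.path.lexists(galaxy_dataset_path):
        os.remove(galaxy_dataset_path)
    # Create the symlink. When the job runs as the real user (e.g. via
    # PBS/Torque), this fails with an OSError if the user cannot access
    # source_path, which surfaces as a tool error in Galaxy.
    os.symlink(os.path.abspath(source_path), galaxy_dataset_path)
```

Running the step as the real cluster user is what provides the permission check described above.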


Tell me if you're interested in such a tool, and I'll send you the 
associated XML file and bash script.


Bests,

Alban



Le 07/06/2012 20:11, Mehmet Belgin a écrit :

Brad,

Thank you for your fast reply! It looks like library_import_dir is for 
admins and there is another library option for users. I will try with 
that one and see if the files appear in the GUI.


Thanks!


=
Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu 
mailto:mehmet.bel...@oit.gatech.edu)
Scientific Computing Consultant | OIT - Academic and Research 
Technologies

Georgia Institute of Technology
258 Fourth Street, Rich Building, Room 326
Atlanta, GA  30332-0700
Office: (404) 385-0665




On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:


Mehmet:

It's not important how the files get there, they could be moved via 
ftp, scp, cp, smb - whatever.
Galaxy will use that directory to import from no matter how the 
files arrive.


I found that confusing at first too.

Brad
On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:


Hi Everyone,

I am helping a research group to use Galaxy on our clusters. 
Unfortunately I have no previous experience with Galaxy, but I am 
learning along the way. We are almost there, but cannot figure out 
one particular issue. It is about the configuration of Galaxy, so I 
thought the developers list was a better place to ask than the user list.


The Galaxy web interface allows for either copy/paste of text or a 
URL. Unfortunately we cannot set up an FTP server as instructed, due 
to restrictions on the cluster. The files we are trying to upload 
are large, around 2 GB in size. It does not make sense to upload 
these files to a remote location (which we can provide a URL for) 
and download them back, since the data and Galaxy are on the same 
system. However, I could not find a way to open these files locally.


I did some reading and hoped that library_import_dir in 
universe_wsgi.ini would do the trick, but it didn't. I would therefore 
really appreciate any suggestions.


Thanks a lot in advance!

-Mehmet







--
Brad Langhorst
langho...@neb.com mailto:langho...@neb.com
978-380-7564










--
Alban Lermine
Unité 900 : Inserm - Mines ParisTech - Institut Curie
« Bioinformatics and Computational Systems Biology of Cancer »
11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France
Tel : +33 (0) 1 56 24 69 84




--
Bio-informatics consultant
GGF (http://dna.uga.edu) and QBCG (http://qbcg.uga.edu)
706-542-6092 (8-12 Tuesday, Thursday and Friday)
706-542-6877 (8-12 Monday, Wednesday and Friday)
706-583-0442 (12-5 All week)


