Hello Dr. Senturk,

  This is the second time I have gotten a question like this one in a week,
so I am going to cc the galaxy-dev mailing list on this response so there
is a public link I can send out about progress in the future.

  There is some limited caching support, so you don't have to re-transfer
the same inputs over and over. There is no mechanism for periodically
cleaning out this cache, however, and if the LWR is producing large
outputs, they still need to be transferred back to the Galaxy server and
re-uploaded once before they are added to the cache. I have sketched out
some plans for addressing these limitations here -
https://trello.com/c/MPlt8DHJ. I consider these important features, but I
am still not sure how quickly I will be able to get to them.

  If you are interested in testing out caching, I have added some
documentation to Galaxy's sample job_conf.xml file describing how to
enable caching on the client (
https://bitbucket.org/galaxy/galaxy-central/commits/d6826c29e15a1f17553ae2fe2b149ef65ac4da9d
), and on the server side you simply need to specify a directory to store
cached files by un-commenting the file_cache_dir property in the LWR's
server.ini configuration file. As experimental as I consider this feature,
it was a big help for some particularly large workflows in my previous
position, and it allowed us to really scale up what we were doing with the
LWR and Galaxy.
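
  For reference, the two pieces look roughly like this. The server-side
property name (file_cache_dir) is the one mentioned above; from memory the
client-side destination param is just "cache", but treat that as an
assumption and double-check the sample job_conf.xml linked above for the
exact spelling:

    # LWR server.ini - un-comment and point at a directory with enough space
    file_cache_dir = /data/lwr/file_cache

    <!-- Galaxy job_conf.xml - an LWR destination with client-side caching.
         The "cache" param name here is my recollection, not gospel; the
         sample file linked above is authoritative. -->
    <destination id="lwr_cached" runner="lwr">
        <param id="url">http://lwrhost:8913/</param>
        <param id="cache">True</param>
    </destination>

  Keep in mind that nothing expires the cache automatically yet, so watch
disk usage in whatever directory you point file_cache_dir at.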

Hope this helps and thanks for your continued interest in the LWR and
Galaxy!
-John



On Thu, May 8, 2014 at 6:54 AM, izzet fatih <ifs...@gmail.com> wrote:

> Hi John,
> I'm new to Galaxy, so please correct me if I'm wrong. As far as I
> understand, Galaxy dispatches jobs via /lib/galaxy/jobs/handler.py one by
> one, even when they are part of a workflow. Consequently, the LWR receives
> them separately. Suppose two consecutive steps are run by the LWR. If the
> data size is big, this causes an excessive data-transfer burden between
> the steps. Do you have any plans to improve this?
>
> Thanks,
>
>
> On Wed, Apr 30, 2014 at 4:14 PM, John Chilton <jmchil...@gmail.com> wrote:
>
>> Doh - sorry about that. It should be fixed with this commit -
>> https://bitbucket.org/jmchilton/lwr/commits/5ebad638537e19b55b7b6c812c49ae9da079480c
>> .
>>
>> Thanks for your interest in the LWR and let me know if there is anything
>> else I can do to fix it, improve it, etc...
>>
>> -John
>>
>>
>> On Wed, Apr 30, 2014 at 1:19 PM, izzet fatih <ifs...@gmail.com> wrote:
>>
>>> Hi John,
>>> I've downloaded and configured the LWR server and it is a great tool -
>>> thanks for developing it.
>>> When I execute run.sh, I get the following error:
>>>
>>> File
>>> "/home/isenturk/src/jmchilton-lwr-bd8d1cc44e75/lwr/managers/base/directory.py",
>>> line 13, in <module>
>>>     from ..util.env import env_to_statement
>>> ImportError: No module named env
>>>
>>> This exception is thrown from
>>> jmchilton-lwr-bd8d1cc44e75/lwr/managers/base/directory.py.
>>> It is trying to import env_to_statement from ..util.env.
>>>
>>> If I am not mistaken, the module is supposed to be under
>>> jmchilton-lwr-bd8d1cc44e75/lwr/managers/util/
>>> However, it is not there.
>>>
>>> Thanks,
>>>
>>>
>>> --
>>> Izzet F Senturk, PhD
>>> Post Doctoral Researcher
>>> College of Medicine SBS-Biomedical Informatics
>>> 310-05 Lincoln Tower, 1800 Cannon Drive, Columbus, OH 43210
>>> 6148250565 Office
>>> sentur...@osu.edu http://bmi.osu.edu/hpc/
>>>
>>
>>
>
>
> --
> Izzet F Senturk, PhD
> Post Doctoral Researcher
> College of Medicine SBS-Biomedical Informatics
> 310-05 Lincoln Tower, 1800 Cannon Drive, Columbus, OH 43210
> 6148250565 Office
> sentur...@osu.edu http://bmi.osu.edu/hpc/
>