My recommendation would be to make the tool dependency install work on
as many platforms as you can, and not to optimize in a way that is not
going to work everywhere - i.e. favor reproducibility over performance.
If a system administrator or institution wants to sacrifice
reproducibility and optimize specific packages, they should be able to
do so manually. It's not just ATLAS and CPU throttling, right? It's
vendor versions of MPI, GPGPU variants of code, variants of OpenMP,
etc. Even if the tool shed provided some mechanism for determining
whether a particular package optimization is going to work, it is
perhaps better not to enable it by default, because these optimized
builds frequently produce slightly different results than the
unoptimized versions.

The problem with this recommendation is that Galaxy currently provides
no mechanism for doing so. Luckily this is easy to solve, and the
solution solves other problems too. If the tool dependency resolution
code grabbed a manually configured dependency instead of the tool shed
variant when one is available, instead of favoring the opposite, then
it would be really easy to drop in an optimized version of numpy or an
MPI version of software X.
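A minimal sketch of what that resolution order could look like. The
flat <tool_dependency_dir>/<name>/<version>/env.sh layout for a manual
install follows Galaxy's convention, but the deeper shed-install path
and the resolve_env helper itself are my own assumptions, not existing
Galaxy code:

```shell
#!/bin/sh
# Hypothetical sketch: prefer a manually configured dependency's env.sh
# over the tool shed-installed variant, instead of the other way round.
resolve_env() {
    dep_dir=$1; name=$2; version=$3
    # Manually configured: <dep_dir>/<name>/<version>/env.sh
    manual="$dep_dir/$name/$version/env.sh"
    # Tool shed installs nest further down (owner/repository/revision);
    # the exact depth here is an approximation.
    shed=$(find "$dep_dir/$name/$version" -mindepth 2 -name env.sh \
        2>/dev/null | head -n 1)
    if [ -f "$manual" ]; then
        echo "$manual"   # admin-provided build (optimized numpy, MPI) wins
    elif [ -n "$shed" ]; then
        echo "$shed"     # otherwise fall back to the tool shed variant
    else
        return 1         # no dependency found at all
    fi
}
```

With this ordering an admin can drop an env.sh next to the version
directory and override the shed build without touching it.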

What's great is that this solves other problems as well. For instance,
our genomics Galaxy web server runs Debian but the worker nodes run
CentOS. This means many tool shed-installed dependencies do not work.
JJ, being the patient guy he is, goes in and manually updates the tool
shed-installed env.sh files to load modules. Even if you think not
running the same OS version on your server and worker nodes is a bit
crazy, there is the much more reasonable (and common) case of simply
wanting to submit to multiple different clusters. When I was talking
with the guys at NCGAS, they were unsure how to do this; this one
change would make it a lot more tenable.
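That kind of hand edit is essentially replacing the compiled-path
setup in the shed-installed env.sh with a module load. A sketch of
what it might look like - the module name "numpy/1.7.0" is a made-up
example, real names depend entirely on the site:

```shell
# Hand-edited env.sh fragment: defer to the cluster's module system
# instead of a tool shed-compiled binary, so the same file works on
# worker nodes running a different OS than the web server.
if [ -n "$MODULESHOME" ]; then
    . "$MODULESHOME/init/sh"    # make the `module` command available
    module load numpy/1.7.0     # cluster-built, OS-appropriate package
fi
```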

-John

On Thu, Sep 26, 2013 at 1:29 PM, Björn Grüning
<bjoern.gruen...@pharmazie.uni-freiburg.de> wrote:
> Hi,
>
>> Hi Bjoern,
>>
>> Is there anything else we (the Galaxy community) can do to help
>> sort out the ATLAS installation problems?
>
> Thanks for asking. I do indeed have a few things I would like
> comments on.
>
>> Another choice might be to use OpenBLAS instead of ATLAS, e.g.
>> http://stackoverflow.com/questions/11443302/compiling-numpy-with-openblas-integration
>
> I have no experience with it. Does it also need CPU throttling turned
> off? I would assume so; otherwise, how would it optimize itself?
>
>> However, I think we could build NumPy without using ATLAS or any
>> BLAS library. That seems like the most pragmatic solution
>> in the short term - which I think is what Dan tried here:
>> http://testtoolshed.g2.bx.psu.edu/view/blankenberg/package_numpy_1_7
>
> I can remove them if that is the consensus.
>
> A few points:
> - fixing the ATLAS issue can speed up numpy, scipy, and R
> considerably (by 400% in some cases)
> - as far as I understand, the performance gain comes from ATLAS
> optimizing itself for the specific hardware; for ATLAS there is no
> way around disabling CPU throttling (how about OpenBLAS?)
> - it seems to be complicated to deactivate CPU throttling on OS X
> - binary installation does not make sense in this case, because ATLAS
> is self-optimizing
> - distribution-shipped ATLAS packages are not really faster
>
> Current state:
> - ATLAS tries two different commands to deactivate CPU throttling.
> AFAIK that only works on some Ubuntu versions, where no root
> privileges are necessary.
> - If ATLAS fails for some reason, the numpy/R/scipy installation
> should not be affected (that was at least the aim)
>
> Questions:
> - Is it worth the hassle for the speed improvement? "pip install
> numpy" would be so much easier.
>
> - If we want to support ATLAS, any better idea of how to implement
> it? Any Tool Shed feature that can help? -> interactive installation?
>         - can we flag a tool dependency as optional, so it is allowed
> to fail?
>
> - Can anyone help with testing and fixing it?
>
>
> Any opinions/comments?
> Bjoern
>
>> Thanks,
>>
>> Peter
>
>
>
> ___________________________________________________________
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/
