On Nov 29, 11:37 am, Robert Bradshaw <[EMAIL PROTECTED]>
wrote:
> On Nov 29, 2008, at 11:26 AM, Justin C. Walker wrote:
>
>
Hi,
> > On Nov 29, 2008, at 07:35 , pong wrote:
>
> >> Hi,
>
> >> I wonder if SAGE is optimized for multi-core CPUs (people told me
> >> that many programs aren't).
>
> > This is not an easy question to answer. Sage is built from many
> > components that were not specifically designed with Sage or
> > multiprocessor issues in mind.
>
> > Most programmers and algorithm designers, even today, don't think in
> > terms of a "tight-coupled" multiprocessor implementation. Some do
> > think of loose coupling. The distinction is the amount of information
> > needed to be shared between the "cooperating processes". For example,
> > Michael and a host of others worked hard to get the Sage build to take
> > advantage of multiple processors. This was not easy, because the
> > components come from many sources, and their build process was not
> > designed to take advantage of these systems, but it was feasible
> > because the different components need very little information from the
> > other components (basically, 'make' has to know what the dependencies
> > are, so it can find independent builds to run at the same time).
>
> > The software that makes up Sage is another matter entirely. Code does
> > not automatically work optimally on a multiprocessor system (whether
> > multi-core or multiple single-core chips). That effort takes a lot of
> > work.
>
> One important component of Sage, ATLAS, does take advantage of
> multiple processors, which is directly taken advantage of in all the
> linear algebra (numeric and exact).
No, by default it doesn't in Sage on non-OSX platforms, since we only
build a single-threaded ATLAS there. On OSX, thanks to the Accelerate
Framework, large problems that employ BLAS do use multiple cores
automatically. The BLAS portion of the Accelerate Framework contains
code derived from ATLAS, but that code was never contributed back by
Apple.
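Whether the BLAS a given install is linked against actually uses multiple cores is easy to check empirically: time a large matrix product while watching CPU utilization in `top`. A minimal sketch from Python (assuming numpy is linked against the system BLAS, as it is in Sage):

```python
import time
import numpy as np

# A large dense matrix product goes through the BLAS dgemm that numpy
# was linked against, so timing one (while watching `top` in another
# terminal) is a quick empirical check of how many cores the BLAS uses.
n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
c = np.dot(a, b)   # dispatched to the underlying BLAS dgemm
elapsed = time.time() - t0

print("dgemm of %dx%d matrices took %.3f s" % (n, n, elapsed))
```

On an OSX box with the Accelerate Framework this should keep several cores busy; with the single-threaded ATLAS Sage currently builds elsewhere, only one.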
I have built Sage versions with a multi-CPU ATLAS by default, but that
code never made it upstream. I have been contemplating what to do, and
I have the following BLAS plan:
* add support for multi-threaded ATLAS
* add optional support for multi-threaded GotoBLAS. GotoBLAS is
faster than ATLAS and builds much faster, but it is only free for
non-commercial use, so it can't be the default. Another advantage is
that you can select the number of threads used for a given problem at
runtime, which is not possible with ATLAS - at least not with the
3.8.2 stable release.
* add optional support for CUDA BLAS, i.e. running the BLAS
computations on the GPU. CUDA BLAS is also closed source and requires
a recent Nvidia GPU, since only those guarantee IEEE compliance for
singles and doubles (Tesla preferred :))
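To illustrate the runtime thread selection mentioned for GotoBLAS above: GotoBLAS reads its thread count from the GOTO_NUM_THREADS environment variable when the library is loaded (falling back to OMP_NUM_THREADS). A minimal sketch - note the variable must be set before the BLAS is first loaded, i.e. before numpy is imported:

```python
import os

# Choose the GotoBLAS thread count for this process.  This has to
# happen before the BLAS is loaded, i.e. before the first numpy import.
os.environ["GOTO_NUM_THREADS"] = "4"

import numpy as np

# Any BLAS-backed operation from here on uses (up to) 4 threads
# when numpy is linked against GotoBLAS.
n = 500
a = np.random.rand(n, n)
b = np.random.rand(n, n)
c = np.dot(a, b)
print("computed a %dx%d product" % c.shape)
```

With the ATLAS 3.8.2 stable release the thread count is fixed at build time, so no equivalent runtime knob exists there.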
I want to do all three at the same time, since I will have to touch
the same code base for all three problems. CUDA BLAS will likely only
go into a few places (LinBox for sure, probably numpy/scipy), but as
we will have a system with two Tesla C1060s in the Sage lab next week,
I have high hopes for getting support into Sage quickly.
In the end, all of the above requires setting certain build flags, and
we certainly need a cleanup there. Once all of this happens we will
also need much better documentation, since as it stands even the few
existing ATLAS options are mysterious, even to me, unless I read the
source of the various spkg-install scripts.
Thoughts?
> - Robert
Cheers,
Michael