On Thu, Jun 25, 2009 at 5:53 AM, Robert Bradshaw <[email protected]> wrote:
>
> On Jun 23, 2009, at 3:09 PM, rje wrote:
>
>> First, thanks to David and William, who have answered my questions in
>> the past.
>>
>> I have access to NVIDIA Tesla and AMD Firestream GPGPU hardware. Are
>> there any existing tools which would help facilitate porting and
>> finely parallelizing the following 3-line Sage program to take
>> advantage of that hardware ?
>
> Not that I know of, but you can look at how the computation is done
> under the hood and write it in a more parallel way.
>
>> sage: G=DirichletGroup(18900, GF(193));X=G.list();Y=X[0];
>> sage: M=ModularSymbols(Y,4,sign=1);
>
> This probably is dominated by the linear algebra over Q.

Where Q = GF(193) in this case -- see the code above.

>  Dense linear
> algebra is done using multi-modular methods, which lend themselves
> nicely to parallelization. (I'm not sure about sparse linear algebra
> over Q, or even whether this space is represented sparsely, but I
> think it should be.)
>
>> sage: A=(M.T(19)-162).kernel()
>
> Computing the Hecke matrix is done via a large sum, which could be
> distributed among several processes.
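A sketch of that distribution in plain Python (not Sage's actual code -- the 2x2 `term` function below is a made-up stand-in for the real summand, roughly the contribution of one matrix in the finite list defining the Hecke action): split the index set into chunks, hand each chunk to a worker, and add up the partial sums at the end.

```python
from multiprocessing import Pool

def term(k):
    # Stand-in for the k-th summand; in the real computation this would
    # be the action of one matrix on the modular symbols basis.
    return [[k, 2 * k], [3 * k, 4 * k]]

def add(A, B):
    """Entrywise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def partial_sum(ks):
    """Sum of term(k) over one chunk of indices -- the per-worker job."""
    total = [[0, 0], [0, 0]]
    for k in ks:
        total = add(total, term(k))
    return total

def hecke_sum(n_terms, n_workers=4):
    # Interleaved chunks: worker i gets indices i, i + n_workers, ...
    chunks = [range(i, n_terms, n_workers) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)
    total = [[0, 0], [0, 0]]
    for P in partials:
        total = add(total, P)
    return total
```

Each chunk is independent, so the same split works for processes, MPI ranks, or GPU thread blocks; only the final reduction needs the partial results together.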
>
> For both of these steps, it's easy to see how to do it in theory, but
> it'll take some manual work to actually implement. It would be
> interesting to have as part of Sage, though.

>
> - Robert
>

-- 
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org

