Tuning is always difficult, especially when the search space is
multidimensional.

FLINT 2.0 will come with a tuning facility which will tune all linear
cutoffs at build time.

I'm afraid I don't have anything to offer for multidimensional
cutoffs. Usually the search space is just too large to have an
efficient tuning implementation.

Something I do notice is that multiprecision arithmetic tends to need
fewer adjustments to its tuning than, say, arithmetic over Z/pZ for
word-sized p, or single-precision code.

Finding a single algorithm that doesn't require some tuning cutoffs is
almost certainly impossible, if you want maximum performance.
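For what it's worth, the sort of one-dimensional crossover search a
build-time tuner performs can be sketched like this (a hedged Python
sketch; the function and its arguments are illustrative, not FLINT code):

```python
import timeit

def find_cutoff(algo_small, algo_big, make_input, sizes, reps=5):
    """Return the first size at which algo_big beats algo_small.

    This assumes a single crossover point (a "linear" cutoff): below
    it algo_small wins, above it algo_big wins.  Multidimensional
    cutoffs blow this search up combinatorially, which is exactly the
    problem with tuning them efficiently.
    """
    for n in sizes:
        x = make_input(n)
        # min over several repetitions damps scheduler/timer noise
        t_small = min(timeit.repeat(lambda: algo_small(x), number=1, repeat=reps))
        t_big = min(timeit.repeat(lambda: algo_big(x), number=1, repeat=reps))
        if t_big < t_small:
            return n
    return None  # algo_big never won on the tested sizes
```

A build-time tuner would then bake the returned size into a generated
header, which is why the result goes stale on differently configured
machines.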

Bill.

2009/3/20 William Stein <wst...@gmail.com>:
> On Fri, Mar 20, 2009 at 10:18 AM, Nick Alexander <ncalexan...@gmail.com> 
> wrote:
>>
>>>> Instead I worked around this by computing the determinants
>>>> mod 2 and mod 13 and using CRT (if the determinants were
>>>> both units).  The time was then almost trivial.  Suppose I
>>>> replace this problem over ZZ/25ZZ or ZZ/256ZZ. I would
>>>> still hope that the problem would NOT be lifted to ZZ for
>>>> computation, since this would certainly not terminate in
>>>> reasonable time for a dense matrix.
>>
>> In general doing this via the CRT requires factoring, so some cutoff
>> must be made, which will become outdated...
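The mod 2 / mod 13 determinant trick quoted above can be illustrated as
follows (a Python sketch; `det_mod_p` and `crt` are illustrative helpers,
not Sage or FLINT functions, and `pow(a, -1, p)` needs Python 3.8+):

```python
def det_mod_p(A, p):
    """Determinant of a square matrix A over the prime field Z/pZ,
    by Gaussian elimination (O(n^3) field operations)."""
    A = [[x % p for x in row] for row in A]
    n = len(A)
    det = 1
    for c in range(n):
        # find a nonzero pivot in column c
        pivot = next((r for r in range(c, n) if A[r][c]), None)
        if pivot is None:
            return 0
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            det = -det % p  # row swap flips the sign
        det = det * A[c][c] % p
        inv = pow(A[c][c], -1, p)  # modular inverse; p must be prime
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            for k in range(c, n):
                A[r][k] = (A[r][k] - f * A[c][k]) % p
    return det % p

def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1), x = r2 (mod m2) for coprime m1, m2."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2)
```

For example, with A = [[3, 5], [7, 4]] (determinant -23),
crt(det_mod_p(A, 2), 2, det_mod_p(A, 13), 13) recovers -23 mod 26,
with no lifting to ZZ.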
>
> Writing fast code that works for a range of inputs almost always
> involves making one or more explicit cutoffs, and every single one
> that is ever made will eventually be non-optimal in the future, or
> even on differently configured computers now.
>
> I just spent some time making the linear algebra over ZZ and QQ 5-250
> times faster in lots of cases by special-casing small dimensions to
> use PARI (etc.), which is much quicker there, then switching to an
> asymptotically fast native Sage algorithm for big input.   This
> literally speeds up a lot of code that uses small linear algebra by a
> factor of 5-250.  I just hardcoded some defaults for now -- e.g., use
> PARI for dets of matrices with fewer than 50 rows.   Of course, it will
> be much better to factor out all this "tuning info" in a global object
> that can be optimized for a range of hardware and be updated over time
> (sort of like a Sage version of ATLAS).   That said, even with
> hardcoded values, doing this special casing is *dramatically* -- I
> mean *dramatically* -- better than not doing anything.  It also makes
> it much easier to factor out the "tuning" object, when we do that
> later.
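The special-casing described above amounts to a cutoff-driven dispatch;
a minimal sketch of what factoring the cutoff into a global "tuning"
object might look like (the 50-row default is taken from the example
above; the names and backends are illustrative, not actual Sage code):

```python
# Hypothetical tuning table; a per-machine optimizer would rewrite
# these values, leaving the dispatch code untouched.
TUNING = {"det_small_dim_cutoff": 50}

def det(matrix, backends):
    """Dispatch a determinant computation on matrix dimension.

    backends: dict with 'small' and 'large' callables, standing in
    for e.g. a PARI call and an asymptotically fast native routine.
    """
    n = len(matrix)
    if n < TUNING["det_small_dim_cutoff"]:
        return backends["small"](matrix)   # low-overhead path for small input
    return backends["large"](matrix)       # asymptotically fast path for big input
```

Keeping the cutoff in one global object is what makes later per-machine
optimization (the "Sage version of ATLAS" idea) a local change rather
than a hunt through hardcoded constants.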
>
> Bill Hart -- you might want to make some remarks at this point, since
> you are constantly dealing with such "tuning" issues in MPIR and
> FLINT.
>
>  -- William
>
