I see what you mean, and you might be right in this case. But I still stand by my proposal: do you think that an algorithm='parallel' option would make sense for the cases where an optimal tuning heuristic is not possible? I can think of several situations where, even with good tuning, you can find exceptions. Think, for example, of provable primality testing: there are probabilistic methods that are much faster than the deterministic ones on average... but much slower in the worst case. Or Groebner basis computations, where some term orderings are usually much faster than others, but you can find examples where this rule fails. Or symbolic integration, where not only can the speed vary between Maxima and SymPy without a clear criterion to determine a priori which will be faster, but the ability to find a solution at all may also vary.
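To make the idea concrete, here is a minimal sketch (not an actual Sage interface) of what such a "race the algorithms" strategy could look like: submit every candidate implementation concurrently and return whichever finishes first. The function name `race` and the toy workers are made up for illustration; a real version for CPU-bound work would use ProcessPoolExecutor (threads shown here only to keep the sketch simple, since Python threads do not run CPU-bound code in parallel).

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def race(funcs, *args):
    """Run each candidate algorithm on the same input; return the
    first result that completes and (best effort) cancel the rest."""
    with ThreadPoolExecutor(max_workers=len(funcs)) as pool:
        futures = [pool.submit(f, *args) for f in funcs]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for fut in pending:
            fut.cancel()  # already-running workers still run to completion
        return next(iter(done)).result()

# Toy stand-ins for two algorithms with very different running times:
import time

def deterministic(n):
    time.sleep(0.5)  # pretend this is the slow, worst-case-safe method
    return "deterministic"

def probabilistic(n):
    return "probabilistic"  # fast on this input
```

On an input where the probabilistic method wins, `race([deterministic, probabilistic], 10)` returns `"probabilistic"` almost immediately instead of waiting out the slow method; the cost is the wasted work on the losing cores, which is exactly the trade-off being proposed.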
I think this could be a good way to take advantage of several cores.

On May 8, 12:33, Jeroen Demeyer <jdeme...@cage.ugent.be> wrote:
> In my opinion, this looks like a bad excuse to skip proper tuning
> (especially for the ticket you refer to)...