Quoting Mikkel Krøigård <[EMAIL PROTECTED]>:

> Quoting Martin Geisler <[EMAIL PROTECTED]>:
>
> > I've looked at the GMPY code, and it is a fairly straightforward
> > wrapper for the GMP library, as you describe.
> >
> > But I don't know if it makes it easier for us to benchmark just
> > because it is split into its own C code...
> I never said it would. If you use this approach, it is easy to see how
> much is spent on the dangerous arithmetic, but I guess a profiler could
> tell you how much time Python spends on the functions implementing the
> operators anyway.

If that's the case, then using GMPY doesn't buy us anything with respect to
the profiling. I was assuming the profiler could not give us information that
fine-grained.
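
For what it's worth, the standard profiler should already attribute time
directly to the operator methods, as long as the arithmetic sits in
Python-level __mul__ and friends. A rough sketch with a toy field class
(not the real VIFF field code; modulus and sizes picked arbitrarily):

    import cProfile
    import random

    class GF(object):
        """Toy stand-in for a field element with Python-level operators."""
        modulus = 2**127 - 1  # just an example modulus

        def __init__(self, value):
            self.value = value % self.modulus

        def __mul__(self, other):
            # cProfile reports the time spent here under GF.__mul__.
            return GF(self.value * other.value)

    def work(n=100000):
        a = GF(random.getrandbits(126))
        b = GF(random.getrandbits(126))
        for _ in range(n):
            a = a * b

    cProfile.run('work()')

The per-function numbers for __mul__ would then be exactly the "dangerous
arithmetic" part, measured without any help from GMPY.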

But at least it is good news that Sigurd saw a speed-up from using C, albeit
only on large numbers. It indicates that the raw computing time is not
completely dwarfed by bookkeeping and other overhead.
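
To put a rough number on that, a quick timeit comparison of plain Python
longs against gmpy for big multiplications might do (just a sketch; it
assumes gmpy is installed, and the 2048-bit operand size and repetition
counts are picked arbitrarily):

    import timeit

    setup = ("import random, gmpy\n"
             "x = random.getrandbits(2048)\n"
             "y = random.getrandbits(2048)\n"
             "gx = gmpy.mpz(x)\n"
             "gy = gmpy.mpz(y)\n")

    py_timer  = timeit.Timer("x * y", setup)
    gmp_timer = timeit.Timer("gx * gy", setup)

    print("python longs: %.3f s" % min(py_timer.repeat(3, 100000)))
    print("gmpy mpz:     %.3f s" % min(gmp_timer.repeat(3, 100000)))

I would expect the gap to shrink for small operands, where wrapper and
conversion overhead starts to matter, which fits with the speed-up showing
up mainly on large numbers.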

>
> It is not completely unimaginable, however, that someone would want to
> know how much actually goes on inside gmpy (arithmetic on big numbers,
> the data) and how much goes on outside (counting variables, various
> kinds of overhead).

That someone is me. I think it is important to know what fraction of the time
is spent on computation we actually HAVE to do.
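
One rough way to get at that fraction, assuming the arithmetic lives in
Python operator methods the profiler can see: profile a run, sum the time
attributed to the arithmetic methods and divide by the total. A
self-contained sketch (run_protocol and the Elem class are just stand-ins,
not real VIFF code):

    import cProfile
    import pstats

    class Elem(object):
        # Minimal stand-in for a field element class.
        def __init__(self, v):
            self.v = v % (2**127 - 1)
        def __mul__(self, other):
            return Elem(self.v * other.v)
        def __add__(self, other):
            return Elem(self.v + other.v)

    def run_protocol():
        # Stand-in for the real computation: arithmetic mixed with some
        # bookkeeping-style overhead (object churn, list handling).
        a, b = Elem(3), Elem(5)
        log = []
        for i in range(50000):
            a = a * b + a
            log.append((i, a))
            if len(log) > 1000:
                del log[:]

    cProfile.run('run_protocol()', 'profile.out')
    stats = pstats.Stats('profile.out')

    # Sum the time spent inside the arithmetic methods themselves and
    # compare it with the total time of the profiled run.
    arithmetic = 0.0
    for (filename, lineno, funcname), timing in stats.stats.items():
        cc, nc, tt, ct, callers = timing
        if funcname in ('__mul__', '__add__'):
            arithmetic += tt

    print("time in arithmetic methods: %.1f%% of total" %
          (100.0 * arithmetic / stats.total_tt))

The percentage this prints is roughly the part of the run that faster
arithmetic could help with; the rest is the kind of overhead Mikkel
mentions.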

regards, Ivan
