I see, you are thinking of the general fractional case.
My point was that whole numbers seem to pop up often, and reusing those
is easy.  I did a test tracking actual floating-point usage, and the
majority of heavy use comes from integral values.  It would indeed be
strange if some fractional number were heavily used, but it can be
argued that integral ones are "special" in many ways.
Anyway, Skip noted that 50% of all floats are whole numbers between -10
and 10 inclusive, and this is the code that I employ in our Python build
today:

PyObject *
PyFloat_FromDouble(double fval)
{
        register PyFloatObject *op;
        int ival;
        if (free_list == NULL) {
                if ((free_list = fill_free_list()) == NULL)
                        return NULL;
                /* CCP addition, cache common values */
                if (!f_reuse[0]) {
                        int i;
                        for (i = 0; i < 21; i++)
                                f_reuse[i] = PyFloat_FromDouble((double)(i - 10));
                }
        }
        /* CCP addition, check for recycling */
        ival = (int)fval;
        if ((double)ival == fval && ival >= -10 && ival <= 10) {
                ival += 10;
                if (f_reuse[ival]) {
                        Py_INCREF(f_reuse[ival]);
                        return f_reuse[ival];
                }
        }
...


Cheers,

Kristján

> -----Original Message-----
> From: "Martin v. Löwis" [mailto:[EMAIL PROTECTED] 
> Sent: 2. október 2006 14:37
> To: Kristján V. Jónsson
> Cc: Bob Ippolito; python-dev@python.org
> Subject: Re: [Python-Dev] Caching float(0.0)
> 
> Kristján V. Jónsson schrieb:
> > I can't see how this situation is any different from the re-use of
> > low ints.  There is no fundamental law that says that ints below 100
> > are more common than others, yet experience shows that this is so,
> > and so they are reused.
> 
> There are two important differences:
> 1. it is possible to determine whether the value is "special" in
>    constant time, and also fetch the singleton value in constant
>    time for ints; the same isn't possible for floats.
> 2. it may be that there is a loss of precision in reusing an existing
>    value (although I'm not certain that this could really happen).
>    For example, could it be that two values compare equal under
>    ==, yet are different values? I know this can't happen for
>    integers, so I feel much more comfortable with that cache.
> 
> > Rather than to view this as a programming error, why not simply
> > accept that this is a recurring pattern and adjust python to be more
> > efficient when faced by it?  Surely a lot of karma lies that way?
> 
> I'm worried about the penalty that this causes in terms of 
> run-time cost. Also, how do you chose what values to cache?
> 
> Regards,
> Martin
> 
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
