[EMAIL PROTECTED] wrote:
> Steve> By these statistics I think the answer to the original
> Steve> question is clearly "no" in the general case.
>
> As someone else (Guido?) pointed out, the literal case isn't all that
> interesting. I modified floatobject.c to track a few interesting
> floating point values:
>
> static unsigned int nfloats[5] = {
>     0,    /* -1.0 */
>     0,    /* 0.0 */
>     0,    /* +1.0 */
>     0,    /* everything else */
>     0,    /* whole numbers from -10.0 ... 10.0 */
> };
>
> PyObject *
> PyFloat_FromDouble(double fval)
> {
>     register PyFloatObject *op;
>     if (free_list == NULL) {
>         if ((free_list = fill_free_list()) == NULL)
>             return NULL;
>     }
>
>     if (fval == 0.0) nfloats[1]++;
>     else if (fval == 1.0) nfloats[2]++;
>     else if (fval == -1.0) nfloats[0]++;
>     else nfloats[3]++;
>
>     if (fval >= -10.0 && fval <= 10.0 && (int)fval == fval) {
>         nfloats[4]++;
>     }
This doesn't actually give us a very useful indication of the potential
memory savings. What I think would be more useful is tracking the
maximum *simultaneous* count of each value, i.e. the maximum refcount
each of these values would have reached if they had been shared.
Tim Delaney
_______________________________________________
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com