Couple of observations:
1. Are you sure your data is handled as 32 bit all the way through? Run-time casting will offset any performance gains from 32 bit floats. Is your comparison routine casting to double?
I thought this might be the case, but I'd expect the effect to be small. The only place hidden casts might be happening is in statements like
"query->xmin <= key->xmax" (sketched below).
2. Math coprocessors usually crunch at 80 bits internally, so you can't save much arithmetic time by using 32 bit floats, although cache locality will be better (see the sketch after this list).
3. A 64 bit CPU will probably run better on 64 bit floats.
4. Is your dataset dense enough that loss of precision now gives you duplicate values? That would of course impact performance. How big is your dataset? How big is your average result set?
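On point 2, C99 exposes the evaluation width directly, so you can check whether float arithmetic really happens at 32 bits or gets widened to the x87's 80-bit registers:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* FLT_EVAL_METHOD (C99 <float.h>):
             0 - each operation evaluated at its own type's width
             1 - float expressions evaluated as double
             2 - everything evaluated as long double (x87 80-bit)
           On x87 builds this is typically 2, so 32 bit floats buy
           no arithmetic speed, only a smaller memory footprint --
           which is where the cache win comes from. */
        printf("FLT_EVAL_METHOD = %d\n", (int) FLT_EVAL_METHOD);
        return 0;
    }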
My test datasets are quite spatially separate, so I don't expect any "accidental" overlaps. There are about 12 million rows in total across the datasets, and I've only noticed about two of these overlaps. My test datasets are 1,000 rows, 10,000 rows, 10,000,000 rows, and a few different ones in the 200,000 row range.
I'm testing queries that return anywhere from 1 geometry to about 10,000.
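To make the precision-loss mechanism concrete: near a coordinate of 1,000,000 a float32's spacing (ulp) is 0.0625, so any two coordinates closer together than that collapse to the same value when the doubles are narrowed. A minimal standalone check:

    #include <stdio.h>

    int main(void)
    {
        /* Two x-coordinates 5 cm apart near x = 1,000,000, where a
           float's spacing is 0.0625: both round to the same float32,
           so the two bounding boxes become identical. */
        double a = 1000000.10;
        double b = 1000000.15;
        float  fa = (float) a;
        float  fb = (float) b;

        printf("as double: equal? %d\n", a == b);   /* prints 0 */
        printf("as float : equal? %d\n", fa == fb); /* prints 1 */
        return 0;
    }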
The actual index search is only a few milliseconds longer using the float32 bounding box on a mid-sized table returning a handful of rows. As the result sets get bigger (or you start doing nested queries), the performance differences grow.
dave