Hi Heikki!
Thank you for reviewing the code, it's always inspiring when one's work
gets noticed (:

>Looking at the code, by "margin", you mean the sum of all "edges", i.e. of
each dimension, of the cube.
Indeed. As far as I remember, this terminology comes from the old R*-tree
paper. Nowadays the word "perimeter" is used instead, which seems closer
for English-speaking folks, so in future patches I'll switch to "perimeter".

>I guess the point of that is to differentiate between cubes that have the
same volume, but a different shape.
<Not very sufficient R-tree details>
Somewhat like this. For an R[something]-tree, volume is the ultimate
measure for comparing figures that are close to a multidimensional cube.
The tree must tend to form MBRs with equal edges, to ensure that search
optimizations are spread across all dimensions equally.
But in practice volume degrades quickly. When you index points, you quickly
get a page with a "plane": lots of points with one dimension fixed to a
single value. Then the R-tree does not work anymore; inserts are
somewhat randomized by GiST, but in practice the tree is a lot worse than
random, due to a randomization skew towards early insert candidates.
Perimeter is not as good as volume in the general case. But it does not
degrade until you have truly identical object MBRs.

So this was volume vs perimeter. For example, MySQL uses a #define to
choose one of these strategies, which does not seem like good design to me.
In this patch I propose to use perimeter when the volume strategy has
degraded, effectively when one dimension's edge is 0 or some dimensions'
edges are close to epsilon.

But there is one more tie to break. When you choose a subtree for
insertion, some candidates have zero volume extension and zero edge
extension: they do not need to be extended at all to accommodate the new
tuple. In this case we choose the smallest one, by volume. If all of them
have 0 volume, we choose by edge sum.

All in all, we have 4 different cases here, ordered by priority.
</Not very sufficient R-tree details>
>So, let's try to come up with a scheme that doesn't require IEEE 754
1. Conversion of my algorithm to float ops. I actually hadn't thought
about it before, but it seems impossible. Bit shifts make no sense for
regular float operations: under no circumstances can exponent bits shift
into the significand field, and vice versa. If we do not move the last bit
of the exponent, all significand bits are garbage, which leaves us with
only 6 bits for the actual value, and that is unacceptable.
2. Your algorithm, besides losing some bits of precision (which is
absolutely acceptable - we have about 31 of them and that's a lot), relies
on a false assumption. We compare tuples on a page not by the MBR of the
inserted tuple, but by the MBR of the tuple on the page, which is different
for every penalty call.
I'll put it in your variable names, they are good for reasoning about the
proposed scheme:

oE: Original tuple's edge sum (perimeter measure). This tuple is on a page.
nE: New tuple's edge sum. This tuple is to be inserted. We compute the
penalty to choose the subtree with minimum penalty.
dE: Edge increase. Edge of union(original|new) minus oE.

oV: Original tuple's volume
nV: New tuple's volume
dV: Volume increase

Best desired algorithm: sort tuples by dV, then by dE, then by oV, then by
oE (all ascending) and pick the top one.
Unfortunately, I do not see a way to do that in the 31 bits of a float (in
GiST we cannot use the sign bit, since <= 0 is used as a magic condition).
So what is implemented is the following.
Say we have a number ALOT which is guaranteed to be bigger than dV, dE, oV
and oE:

if ( dV > 0 )
  penalty = ALOT*3 + dV;  // nonzero dV sorts last
else if ( dE > 0 )
  penalty = ALOT*2 + dE;  // then dE
else if ( oV > 0 )
  penalty = ALOT + oV;
else
  penalty = oE;
// oE is zero for subtrees containing only equal points. In this case we
// pass the split optimization possibility to the next GiST attribute.

The volume and perimeter of the new entry do not affect the choice of
subtree directly, whether they are zero or not.

>Hmm. That's a pretty large increase in construction time. Have you done
any profiling on where the time goes?
Since I've changed only one function, I didn't consider profiling. I asked
Michael to check the disassembly of the function call tree, implemented
his recommendations, and patch v2 has all the build performance back (:
And now it computes aggregates 40% faster.
>continue work within cube
There is one more option: implement the index in an extension using the
CREATE ACCESS METHOD feature of 9.6. It is an option: we could skip all
the GiST API hacks and implement a real RR*-tree, and there are some other
possible improvements.
But, on the other hand, an improvement of the GiST API could be beneficial
to other extensions.

Again, thanks for your attention. Possibly we can come up with some
solution without dirty hacks.

Best regards, Andrey Borodin, Octonica & Ural Federal University.
