Hi, hackers!
Some time ago I submitted to the commitfest a patch that slightly optimizes GiST inserts and updates. Its overall effect is quite modest (about 15% better performance), but for sorted data inserts become 3 times faster. I attribute this effect to mitigation of deficiencies of the old split algorithm. Alexander Korotkov advised me to look at some features of the RR*-tree.

The RR*-tree differs from the combination of GiST + the cube extension in two algorithms:
1. The algorithm to choose a subtree for insertion
2. The split algorithm

This message is about implementing the first one in GiST. I think both of these algorithms would improve querying performance.

Long story short, there are two problems with the choose-subtree algorithm:
1. GiST chooses a subtree via a penalty calculation for each entry, while the RR*-tree employs more complicated conditional logic: when there are MBBs (Minimum Bounding Boxes) that do not require extension, the smallest of them is chosen; in some cases, entries are chosen not by volume but by their margin.
2. The RR*-tree algorithm jumps through entry comparisons non-linearly; I think this means that a GiST penalty computation function would have to access other entries on the page.

In this message I address only the first problem. I want to construct a penalty function that chooses:
1. The entry with zero volume and the smallest margin that can accommodate the insertion tuple without extension, if one exists;
2. The entry with the smallest volume that can accommodate the insertion tuple without extension, if one exists;
3. The entry with zero volume increase after insertion and the smallest margin increase, if one exists;
4. The entry with the minimal volume increase after insertion.

The current cube behavior implements only the 4th option, by returning the MBB's volume increase as the penalty (a float). To implement all 4 options, I use a hack: the order of positive floats corresponds exactly to the order of integers with the same binary representation (I'm not sure this holds on every supported platform).
So I use two bits of the float as if it were an integer, and all the other bits hold the shifted bits of the corresponding float value: option 4 uses the volume increase, option 3 the margin increase, option 2 the MBB volume, and option 1 the MBB margin. You can check the reinterpretation-cast function pack_float() in the patch.

I've tested the patch's performance with the attached test. On my machine the patch slows index construction from 60 to 76 seconds, reduces the size of the index from 85 MB to 82 MB, and reduces the time of aggregate computation from 36 seconds to 29 seconds.

I do not know whether I should continue this work in cube, or whether it would be better to fork cube. Has anyone already tried an RR*-tree implementation? The papers show a very good increase in search performance.

Please let me know if you have any ideas, information or interest in this topic.

Best regards, Andrey Borodin, Octonica & Ural Federal University.
diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c
index 3feddef..9c8c63d 100644
--- a/contrib/cube/cube.c
+++ b/contrib/cube/cube.c
@@ -96,6 +96,7 @@
 bool		cube_contains_v0(NDBOX *a, NDBOX *b);
 bool		cube_overlap_v0(NDBOX *a, NDBOX *b);
 NDBOX	   *cube_union_v0(NDBOX *a, NDBOX *b);
 void		rt_cube_size(NDBOX *a, double *sz);
+void		rt_cube_edge(NDBOX *a, double *sz);
 NDBOX	   *g_cube_binary_union(NDBOX *r1, NDBOX *r2, int *sizep);
 bool		g_cube_leaf_consistent(NDBOX *key, NDBOX *query, StrategyNumber strategy);
 bool		g_cube_internal_consistent(NDBOX *key, NDBOX *query, StrategyNumber strategy);
@@ -420,6 +421,13 @@ g_cube_decompress(PG_FUNCTION_ARGS)
 	PG_RETURN_POINTER(entry);
 }
 
+float pack_float(float actualValue, int realm)
+{
+	/* two bits for the realm, the rest for the value */
+	int			realmAdjustment = *((int *) &actualValue) / 4;
+	int			realCode = realm * (INT32_MAX / 4) + realmAdjustment;	/* we have 4 realms */
+
+	return *((float *) &realCode);
+}
 
 /*
 ** The GiST Penalty method for boxes
@@ -428,6 +436,11 @@ g_cube_decompress(PG_FUNCTION_ARGS)
 Datum
 g_cube_penalty(PG_FUNCTION_ARGS)
 {
+	/*
+	 * REALM 0: No extension is required, volume is zero, return the edge
+	 * REALM 1: No extension is required, return the nonzero volume
+	 * REALM 2: Volume extension is zero, return the nonzero edge extension
+	 * REALM 3: Volume extension is nonzero, return it
+	 */
 	GISTENTRY  *origentry = (GISTENTRY *) PG_GETARG_POINTER(0);
 	GISTENTRY  *newentry = (GISTENTRY *) PG_GETARG_POINTER(1);
 	float	   *result = (float *) PG_GETARG_POINTER(2);
@@ -440,6 +453,32 @@ g_cube_penalty(PG_FUNCTION_ARGS)
 	rt_cube_size(ud, &tmp1);
 	rt_cube_size(DatumGetNDBOX(origentry->key), &tmp2);
 	*result = (float) (tmp1 - tmp2);
+	if (*result == 0)
+	{
+		double		tmp3 = tmp1;	/* remember the union's volume */
+
+		rt_cube_edge(ud, &tmp1);
+		rt_cube_edge(DatumGetNDBOX(origentry->key), &tmp2);
+		*result = (float) (tmp1 - tmp2);
+		if (*result == 0)
+		{
+			if (tmp3 != 0)
+			{
+				*result = pack_float(tmp3, 1);	/* REALM 1: no extension, nonzero volume */
+			}
+			else
+			{
+				*result = pack_float(tmp1, 0);	/* REALM 0: no extension, zero volume, use edge */
+			}
+		}
+		else
+		{
+			*result = pack_float(*result, 2);	/* REALM 2: nonzero edge extension */
+		}
+	}
+	else
+	{
+		*result = pack_float(*result, 3);	/* REALM 3: nonzero volume extension */
+	}
 
 	/*
 	 * fprintf(stderr, "penalty\n"); fprintf(stderr, "\t%g\n", *result);
@@ -891,6 +930,20 @@ rt_cube_size(NDBOX *a, double *size)
 	return;
 }
 
+void
+rt_cube_edge(NDBOX *a, double *size)
+{
+	int			i;
+
+	*size = 0.0;
+	if (a != (NDBOX *) NULL)
+	{
+		for (i = 0; i < DIM(a); i++)
+			*size += Abs(UR_COORD(a, i) - LL_COORD(a, i));
+	}
+	return;
+}
+
 /* make up a metric in which one box will be 'lower' than the other
    -- this can be useful for sorting and to determine uniqueness */
 int32
gistselecttest.sql
Description: Binary data
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers