On Sat, Dec 21, 2013 at 4:36 AM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:
>
> Yet another version. The encoding/decoding code is now quite isolated in
> ginpostinglist.c, so it's easy to experiment with different encodings. This
> patch uses varbyte encoding again.
>
> I got a bit carried away, experimented with a bunch of different encodings.
> I tried rice encoding, rice encoding with block and offset number delta
> stored separately, the simple9 variant, and varbyte encoding.
>
> The compressed size obviously depends a lot on the distribution of the
> items, but in the test set I used, the differences between different
> encodings were quite small.
>
> One fatal problem with many encodings is VACUUM. If a page is completely
> full and you remove one item, the result must still fit. In other words,
> removing an item must never enlarge the space needed. Otherwise we have to
> be able to split on vacuum, which adds a lot of code, and also makes it
> possible for VACUUM to fail if there is no disk space left. That's
> unpleasant if you're trying to run VACUUM to release disk space. (gin fast
> updates already has that problem BTW, but let's not make it worse)
>
> I believe that eliminates all encodings in the Simple family, as well as
> PForDelta, and surprisingly also Rice encoding. For example, if you have
> three items in consecutive offsets, the differences between them are encoded
> as 11 in rice encoding. If you remove the middle item, the encoding for the
> next item becomes 010, which takes more space than the original.
>
> AFAICS varbyte encoding is safe from that. (A formal proof would be nice,
> though.)
>
> So, I'm happy to go with varbyte encoding now; indeed, I don't think we have
> much choice unless someone can come up with an alternative that's
> VACUUM-safe. I have to put this patch aside for a while now; I spent a lot
> more time on these encoding experiments than I intended. If you could take a
> look at this latest version, spend some time reviewing it and cleaning up
> any obsolete comments, and re-run the performance tests you did earlier,
> that would be great. One thing I'm slightly worried about is the overhead of
> merging the compressed and uncompressed posting lists in a scan. This patch
> will be in good shape for the final commitfest, or even before that.
>


I just tried out the patch "gin-packed-postinglists-varbyte2.patch"
(which looks like the latest one in this thread) as follows:

1) Applied patch to the HEAD (on commit
94b899b829657332bda856ac3f06153d09077bd1)
2) Created a test table and index

create table test (a text);
copy test from '/usr/share/dict/words';
create extension if not exists pg_trgm;  -- needed for gin_trgm_ops
create index test_trgm_idx on test using gin (a gin_trgm_ops);

3) Got the following error on a wildcard query:

postgres=# explain (buffers, analyze) select count(*) from test where
a like '%tio%';
ERROR:  lock 9447 is not held
STATEMENT:  explain (buffers, analyze) select count(*) from test where
a like '%tio%';
ERROR:  lock 9447 is not held

Here is the backtrace ("bt") from gdb:

#0  LWLockRelease (lockid=9447) at lwlock.c:747
#1  0x000000000074a356 in LockBuffer (buffer=4638, mode=0) at bufmgr.c:2760
#2  0x0000000000749d6e in UnlockReleaseBuffer (buffer=4638) at bufmgr.c:2551
#3  0x0000000000478bcc in entryGetNextItem (ginstate=0x2380100, entry=0x23832a8) at ginget.c:555
#4  0x0000000000479346 in entryGetItem (ginstate=0x2380100, entry=0x23832a8) at ginget.c:695
#5  0x000000000047987e in scanGetItem (scan=0x237fee8, advancePast=0x7fffa1a46b80, item=0x7fffa1a46b80, recheck=0x7fffa1a46b7f "\001") at ginget.c:925
#6  0x000000000047ae3f in gingetbitmap (fcinfo=0x7fffa1a46be0) at ginget.c:1540
#7  0x00000000008a9486 in FunctionCall2Coll (flinfo=0x236f558, collation=0, arg1=37224168, arg2=37236160) at fmgr.c:1323
#8  0x00000000004bd091 in index_getbitmap (scan=0x237fee8, bitmap=0x2382dc0) at indexam.c:649
#9  0x000000000064827c in MultiExecBitmapIndexScan (node=0x237fce0) at nodeBitmapIndexscan.c:89
#10 0x00000000006313b6 in MultiExecProcNode (node=0x237fce0) at execProcnode.c:550
#11 0x000000000064713a in BitmapHeapNext (node=0x237e998) at nodeBitmapHeapscan.c:104
#12 0x000000000063c348 in ExecScanFetch (node=0x237e998, accessMtd=0x6470ac <BitmapHeapNext>, recheckMtd=0x647cbc <BitmapHeapRecheck>) at execScan.c:82
#13 0x000000000063c3bc in ExecScan (node=0x237e998, accessMtd=0x6470ac <BitmapHeapNext>, recheckMtd=0x647cbc <BitmapHeapRecheck>) at execScan.c:132
#14 0x0000000000647d3a in ExecBitmapHeapScan (node=0x237e998) at nodeBitmapHeapscan.c:436
#15 0x0000000000631121 in ExecProcNode (node=0x237e998) at execProcnode.c:414
#16 0x0000000000644992 in agg_retrieve_direct (aggstate=0x237e200) at nodeAgg.c:1073
#17 0x00000000006448cc in ExecAgg (node=0x237e200) at nodeAgg.c:1026
#18 0x0000000000631247 in ExecProcNode (node=0x237e200) at execProcnode.c:476
#19 0x000000000062ef2a in ExecutePlan (estate=0x237e0e8, planstate=0x237e200, operation=CMD_SELECT, sendTuples=1 '\001', numberTuples=0, direction=ForwardScanDirection, dest=0xd0c360) at execMain.c:1474
#20 0x000000000062cfeb in standard_ExecutorRun (queryDesc=0x237c208, direction=ForwardScanDirection, count=0) at execMain.c:308
#21 0x000000000062ce51 in ExecutorRun (queryDesc=0x237c208, direction=ForwardScanDirection, count=0) at execMain.c:256
#22 0x00000000005ccc46 in ExplainOnePlan (plannedstmt=0x237c170, into=0x0, es=0x7fffa1a47580, queryString=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';", params=0x0) at explain.c:471
#23 0x00000000005cc8f1 in ExplainOneQuery (query=0x2353c10, into=0x0, es=0x7fffa1a47580, queryString=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';", params=0x0) at explain.c:327
#24 0x00000000005cc5d7 in ExplainQuery (stmt=0x231e2c8, queryString=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';", params=0x0, dest=0x2353b40) at explain.c:225
#25 0x0000000000784101 in standard_ProcessUtility (parsetree=0x231e2c8, queryString=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x2353b40, completionTag=0x7fffa1a477e0 "") at utility.c:687
#26 0x0000000000783835 in ProcessUtility (parsetree=0x231e2c8, queryString=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=0x2353b40, completionTag=0x7fffa1a477e0 "") at utility.c:352
#27 0x0000000000782771 in PortalRunUtility (portal=0x235aec8, utilityStmt=0x231e2c8, isTopLevel=1 '\001', dest=0x2353b40, completionTag=0x7fffa1a477e0 "") at pquery.c:1187
#28 0x00000000007824c0 in FillPortalStore (portal=0x235aec8, isTopLevel=1 '\001') at pquery.c:1061
#29 0x0000000000781dc6 in PortalRun (portal=0x235aec8, count=9223372036854775807, isTopLevel=1 '\001', dest=0x231eef8, altdest=0x231eef8, completionTag=0x7fffa1a479c0 "") at pquery.c:785
#30 0x000000000077be51 in exec_simple_query (query_string=0x231d158 "explain (buffers, analyze) select count(*) from test where a like '%tio%';") at postgres.c:1054
#31 0x000000000078004f in PostgresMain (argc=1, argv=0x22b85d8, dbname=0x22b8438 "postgres", username=0x22b8418 "amit") at postgres.c:4011
#32 0x000000000071a811 in BackendRun (port=0x22d5c00) at postmaster.c:4085
#33 0x0000000000719f9b in BackendStartup (port=0x22d5c00) at postmaster.c:3774
#34 0x00000000007167c0 in ServerLoop () at postmaster.c:1585
#35 0x0000000000715eed in PostmasterMain (argc=3, argv=0x22b6350) at postmaster.c:1240
#36 0x0000000000678890 in main (argc=3, argv=0x22b6350) at main.c:194


--
Amit


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
