A cool idea, but if I understand it correctly, very specific and fussy. New
DBs are spawned on this model, and all the developers would have to be
aware of this non-standard behaviour, and DBAs would have to create these
indexes every month, for every DB (as the log tables are created every
month). There are 89 session_id values in the January log (log_2013_01), so
this would quickly get out of control. But - like I said - an interesting
idea for more specific challenges.

From: Marc Mamin [mailto:m.ma...@intershop.de] 
Sent: February 21, 2013 2:41 PM
To: Jeff Janes; Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: AW: [PERFORM] Are bitmap index scans slow to start?

>Rebuilding the index might help, as it would put all the leaf pages holding
>values for session_id=27 adjacent to each other, so they would read from
>disk faster.  But with a name like "session_id", I don't know how long such
>clustering would last though.

>If I'm right about the index disk-read time, then switching to a plain
>index scan rather than a bitmap index scan would make no difference--either
>way the data has to come off the disk.

>>I'd prefer a strategy that allowed fast performance the first time, rather
>>than slow the first time and extremely fast subsequently.

Hello,

if the index is only used to locate rows for a single session_id, you may
consider splitting it into a set of partial indexes.

e.g.
create index i_0 on foo (session_id) where session_id % 4 = 0;
create index i_1 on foo (session_id) where session_id % 4 = 1;
create index i_2 on foo (session_id) where session_id % 4 = 2;
create index i_3 on foo (session_id) where session_id % 4 = 3;

(these can be built in parallel, using separate connections)

Then you will have to ensure that all your WHERE clauses also contain the
matching partial-index predicate:

WHERE session_id = 27 AND session_id % 4 = 27 % 4
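
For example, as a quick sanity check (just a sketch against the foo table
above; 27 % 4 = 3, so i_3 is the partial index that should match):

explain
select *
from foo
where session_id = 27
  and session_id % 4 = 27 % 4;   -- the plan should show a scan using i_3

-- without the "session_id % 4 = 27 % 4" predicate the planner cannot prove
-- that any of the partial indexes covers the query, so none of them is used.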

regards,

Marc Mamin
