This is stepping back quite a while; let me point people to the thread
of 2003-02 where Mariusz Czułada <[EMAIL PROTECTED]> was
looking for a way of optimizing a VIEW that was a UNION.


The subject has come up a few times through PostgreSQL history, and
I'd like to think I may have a little something new to offer to it.

Let's consider a table used to store log information:

create table log_table (
   request_time timestamp with time zone,
   object character varying,  -- What they asked for
   request_type character(8), -- What they did to it
   request_size integer,
   requestor inet,
   request_status integer,
   result_size integer,
   request_detail character varying
);
create index log_times on log_table(request_time);
create index log_object on log_table(object);

Every time "something happens," an entry goes into this table.
Unfortunately, the table is likely to grow to tremendous size, over
time, and there are all sorts of troublesome things about purging it:

 -> Fragmentation may waste space and destroy the usefulness of the
    indexes;

 -> Deleting data row by row will cause replication logic to go mad,
    as triggers get invoked for every single row modified;

 -> The action of deletion will draw the data we just decided was
    _useless_ into memory, injuring cache utilization badly as we fill
    the cache with trash.

The obvious thought: Create several tables, and join them together
into a view.  So instead of log_table being a table, we have
log_table_1 through log_table_3, each with the schema described above, and
define the view:

create view log_table as select * from log_table_1 union all 
                         select * from log_table_2 union all 
                         select * from log_table_3;

It's easy enough (modulo a little debugging and pl/pgsql work :-)) to
turn this into an updatable view so that inserts into log_table use a
different log table every (day|week|month).  And we can TRUNCATE the
eldest one, which is a cheap operation.
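
A minimal sketch of that insert redirection, done here with a rewrite
rule rather than pl/pgsql (the rule name, and the choice of
log_table_1 as the "current" member, are purely illustrative -- the
rotation logic would have to re-point this every day/week/month):

```sql
-- Hypothetical sketch: send inserts on the view to the currently
-- "active" member table (hard-coded as log_table_1 here).
create rule log_table_insert as
  on insert to log_table
  do instead
    insert into log_table_1
      values (new.request_time, new.object, new.request_type,
              new.request_size, new.requestor, new.request_status,
              new.result_size, new.request_detail);
```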

This approach also resembles the way the "O guys" handle partitioned
tables, so it's not merely about "logs."

Unfortunately, selects on the VIEW are, at present, unable to make use
of the indices.  So if we want all log entries for June 11th, the
query

  select * from log_table where request_time between 'june 11 2003' and
                                                   'june 12 2003';

returns a plan:
Subquery Scan log_table  (cost=0.00..10950.26 rows=177126 width=314)
  ->  Append  (cost=0.00..10950.26 rows=177126 width=314)
        ->  Subquery Scan *SELECT* 1  (cost=0.00..3089.07 rows=50307 width=71)
              ->  Seq Scan on log_table_1  (cost=0.00..3089.07 rows=50307 width=71)
        ->  Subquery Scan *SELECT* 2  (cost=0.00..602.92 rows=9892 width=314)
              ->  Seq Scan on log_table_2  (cost=0.00..602.92 rows=9892 width=314)
        ->  Subquery Scan *SELECT* 3  (cost=0.00..2390.09 rows=39209 width=314)
              ->  Seq Scan on log_table_3  (cost=0.00..2390.09 rows=39209 width=314)

In effect, the query is materialized into:

select * from 
   (select * from log_table_1 union all select * from log_table_2
   union all select * from log_table_3) as merger
where [request_time between 'june 11 2003' and 'june 12 2003'];

What would perform better would be to attach the WHERE clause to each
of the union members.  (Everyone stop and sing "Solidarity Forever"
now. :-))

select * from
   (select * from log_table_1 where request_time between 'june 11 2003'
                                                     and 'june 12 2003'
    union all
    select * from log_table_2 where request_time between 'june 11 2003'
                                                     and 'june 12 2003'
    union all
    select * from log_table_3 where request_time between 'june 11 2003'
                                                     and 'june 12 2003'
   ) as merged_version;

Subquery Scan merged_version  (cost=0.00..947.04 rows=247 width=314) (actual time=55.86..1776.42 rows=20124 loops=1)
  ->  Append  (cost=0.00..947.04 rows=247 width=314) (actual time=55.84..1483.60 rows=20124 loops=1)
        ->  Subquery Scan *SELECT* 1  (cost=0.00..3.02 rows=1 width=71) (actual time=55.83..289.81 rows=3422 loops=1)
              ->  Index Scan using log_table_1_trans_on_idx on log_table_1  (cost=0.00..3.02 rows=1 width=71) (actual time=55.80..239.84 rows=3422 loops=1)
        ->  Subquery Scan *SELECT* 2  (cost=0.00..191.38 rows=49 width=314) (actual time=62.32..1115.15 rows=16702 loops=1)
              ->  Index Scan using log_table_2_trans_on_idx on log_table_2  (cost=0.00..191.38 rows=49 width=314) (actual time=62.29..873.63 rows=16702 loops=1)
        ->  Subquery Scan *SELECT* 3  (cost=0.00..752.64 rows=196 width=314) (actual time=26.69..26.69 rows=0 loops=1)
              ->  Index Scan using log_table_3_trans_on_idx on log_table_3  (cost=0.00..752.64 rows=196 width=314) (actual time=26.69..26.69 rows=0 loops=1)
Total runtime: 1806.39 msec

Which is nice and quick, as it cuts each set down to size _before_
merging them.
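
A self-contained illustration of the equivalence, with SQLite standing
in for PostgreSQL and made-up data (table names follow the example
above; the specific rows are invented): filtering the merged result
and pushing the WHERE clause into each UNION ALL arm return exactly
the same rows.

```python
import sqlite3

# Build three small "partition" tables with an index on request_time.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for i in (1, 2, 3):
    cur.execute(f"create table log_table_{i} (request_time text, object text)")
    cur.execute(f"create index log_times_{i} on log_table_{i}(request_time)")
    cur.executemany(
        f"insert into log_table_{i} values (?, ?)",
        [(f"2003-06-{day:02d}", f"object-{i}-{day}") for day in range(1, 29)],
    )

# Form 1: filter applied outside the union (the "materialized" shape).
filtered_outside = """
    select * from
       (select * from log_table_1 union all
        select * from log_table_2 union all
        select * from log_table_3) as merger
    where request_time between '2003-06-11' and '2003-06-12'
"""
# Form 2: the same condition pushed into every union member.
pushed_inside = """
    select * from log_table_1
      where request_time between '2003-06-11' and '2003-06-12'
    union all
    select * from log_table_2
      where request_time between '2003-06-11' and '2003-06-12'
    union all
    select * from log_table_3
      where request_time between '2003-06-11' and '2003-06-12'
"""
a = sorted(cur.execute(filtered_outside).fetchall())
b = sorted(cur.execute(pushed_inside).fetchall())
print(a == b, len(a))  # identical row sets: 2 days x 3 tables = 6 rows
```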

Mariusz had been looking, back in February, for an optimization that
would, in effect, throw away the UNION ALL clauses that were
unnecessary.  Tom Lane and Stephan Szabo, in discussing this,
observed, quite rightly, that this is liable to be an obscure sort of
optimization:

Tom Lane writes:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Yeah, but I think what he's hoping is that it'll notice that
> > "key=1 and key=3" would be noticed as a false condition so that it doesn't
> > scan those tables since a row presumably can't satisfy both. The question
> > would be, is the expense of checking the condition for all queries
> > greater than the potential gain for these sorts of queries.

> Yes, this is the key point: we won't put in an optimization that
> wins on a small class of queries unless there is no material cost
> added for planning cases where it doesn't apply.

In contrast, I would argue that adding the WHERE clause in as an extra
condition on each of the UNION subqueries is an optimization that is
likely to win in _most_ cases.

It helps with the example I illustrated; it would help with Mariusz'
scenario, not by outright eliminating UNION subqueries, but rather by
making their result sets empty.

select key, value from view123 where key = 2;

transforms into...

select key, value from tab1 where key=1 [and key = 2] 
union all
select key, value from tab2 where key=2 [and key = 2] 
union all
select key, value from tab3 where key=3 [and key = 2];

The generalization is that:

  select * from
    (select [fields1] from t1 where [cond1] (UNION|UNION ALL|INTERSECT) 
     select [fields2] from t2 where [cond2] (UNION|UNION ALL|INTERSECT) 
     select [fieldsn] from tn where [condn]) as COMBINATION
   WHERE [globalcond];

is equivalent to:

  select * from
    (select [fields1] from t1 where ([cond1]) and [globalcond] (UNION|UNION ALL|INTERSECT)
     select [fields2] from t2 where ([cond2]) and [globalcond] (UNION|UNION ALL|INTERSECT)
     select [fieldsn] from tn where ([condn]) and [globalcond]
     ) as COMBINATION;
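
A quick check of the claimed equivalence for a combination that uses
INTERSECT rather than UNION ALL, again using SQLite and invented
tables (t1, t2, and their rows are made up for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    create table t1 (key integer, value text);
    create table t2 (key integer, value text);
    insert into t1 values (1, 'a'), (2, 'b'), (2, 'c');
    insert into t2 values (2, 'b'), (2, 'c'), (3, 'd');
""")

# [globalcond] applied outside the combination...
outside = """
    select * from
       (select key, value from t1
        intersect
        select key, value from t2) as combination
    where key = 2
"""
# ...versus AND-ed into each member.
pushed = """
    select key, value from t1 where key = 2
    intersect
    select key, value from t2 where key = 2
"""
a = sorted(cur.execute(outside).fetchall())
b = sorted(cur.execute(pushed).fetchall())
print(a == b, a)  # same result either way
```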

[globalcond] has to be expressed in terms of the fields available for
each subquery, but that already needs to be true, because the global
condition at present is being applied to the fields that are given by
the view.

let name="cbbrowne" and tld="" in String.concat "@" [name;tld];;
Christopher Browne
(416) 646 3304 x124 (land)
