Re: [PERFORM] Bad query optimizer misestimation because of TOAST

2005-02-07 Thread Markus Schaber
Hi, Tom,

Tom Lane schrieb:
 Markus Schaber [EMAIL PROTECTED] writes:
 [Query optimizer misestimation using lossy GIST on TOASTed columns]

 What I would be inclined to do is to extend ANALYZE to make an estimate
 of the extent of toasting of every toastable column, and then modify
 cost_qual_eval to charge a nonzero cost for evaluation of Vars that are
 potentially toasted.

What should we do now? Fixing this issue seems to be a rather long-term job.

Is it enough to document workarounds (as in PostGIS), provided that
such workarounds exist for other GIST users?

Is there a bug tracking system where we could file the problem, so it
does not get lost?

Markus
--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:[EMAIL PROTECTED] | www.logi-track.com


signature.asc
Description: OpenPGP digital signature


Re: [PERFORM] Bad query optimizer misestimation because of TOAST tables

2005-02-02 Thread Tom Lane
Markus Schaber [EMAIL PROTECTED] writes:
 IMHO, this tells the reason. The query planner has a table size of 3
 pages, which clearly is a case for a seqscan. But during the seqscan,
 the database has to fetch an additional amount of 8225 toast pages and
 127 toast index pages, and rebuild the geometries contained therein.

I don't buy this analysis at all.  The toasted columns are not those in
the index (because we don't support out-of-line-toasted index entries),
so a WHERE clause that only touches indexed columns isn't going to need
to fetch anything from the toast table.  The only stuff it would fetch
is in rows that passed the WHERE and need to be returned to the client
--- and those costs are going to be the same either way.

I'm not entirely sure where the time is going, but I do not think you
have proven your theory about it.  I'd suggest building the backend
with -pg and getting some gprof evidence.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Bad query optimizer misestimation because of TOAST

2005-02-02 Thread Markus Schaber
Hi, Tom,

Tom Lane schrieb:

 IMHO, this tells the reason. The query planner has a table size of 3
 pages, which clearly is a case for a seqscan. But during the seqscan,
 the database has to fetch an additional amount of 8225 toast pages and
 127 toast index pages, and rebuild the geometries contained therein.

 I don't buy this analysis at all.  The toasted columns are not those in
 the index (because we don't support out-of-line-toasted index entries),
 so a WHERE clause that only touches indexed columns isn't going to need
 to fetch anything from the toast table.  The only stuff it would fetch
 is in rows that passed the WHERE and need to be returned to the client
 --- and those costs are going to be the same either way.

 I'm not entirely sure where the time is going, but I do not think you
 have proven your theory about it.  I'd suggest building the backend
 with -pg and getting some gprof evidence.

The column is a PostGIS column, and the index was created using GIST.
Those are lossy indices that do not store the whole geometry, but only
the bounding box corners of the geometry (2 points).

Without using the index, the && operator (which tests for bbox
overlapping) has to load the whole geometry from disk and extract the
bbox from it (as it cannot make use of partial fetch).

Some little statistics:

logigis=# select max(mem_size(geom)), avg(mem_size(geom))::int,
max(npoints(geom)) from adminbndy1;
   max    |   avg   |  max
----------+---------+--------
 20998856 | 1384127 | 873657
(1 row)

So the geometries are about 1.3 MB in average size, with a maximum
size of 20 MB. I'm pretty sure this cannot be stored without TOASTing.
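The extent of TOASTing can be confirmed from the system catalogs; a query
along these lines (only standard pg_class columns, table name as above)
shows the heap pages of the table next to the pages of its TOAST relation:

```sql
-- Compare the table's own heap pages with its TOAST relation's pages.
-- reltoastrelid in pg_class links a table to its pg_toast.* relation.
SELECT c.relname,
       c.relpages,
       t.relname AS toast_rel,
       t.relpages AS toast_pages
  FROM pg_class c
  LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
 WHERE c.relname = 'adminbndy1';
```

For the table above, this should show the 3 heap pages the planner sees
next to the thousands of TOAST pages the seqscan actually has to read.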

Additionally, my suggested workaround using a separate bbox column
really works:

logigis=# alter table adminbndy1 ADD column bbox geometry;
ALTER TABLE
logigis=# update adminbndy1 set bbox = setsrid(box3d(geom)::geometry, 4326);
UPDATE 83
logigis=# explain analyze SELECT geom FROM adminbndy1 WHERE bbox &&
setsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498
47.40634259259259)'::box3d,4326);
QUERY PLAN

---
 Seq Scan on adminbndy1  (cost=1.00..10022.50 rows=1
width=32) (actual time=0.554..0.885 rows=5 loops=1)
   Filter: (bbox && 'SRID=4326;BOX3D(9.4835390946502 47.3936574074074
0,9.5164609053498 47.4063425925926 0)'::geometry)
 Total runtime: 0.960 ms
(3 rows)

Here, the sequential scan matching exactly the same 5 rows needs only
about 1/8000th of the time, because it does not have to touch the TOAST
pages at all.
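If this redundant bbox column is kept, it has to be maintained whenever
geom changes. A trigger along these lines could do that (a sketch, not
tested against this schema; it assumes PL/pgSQL is installed and uses the
same PostGIS functions as the UPDATE above):

```sql
-- Keep the redundant bbox column in sync with geom on every write.
CREATE OR REPLACE FUNCTION adminbndy1_sync_bbox() RETURNS trigger AS $$
BEGIN
    NEW.bbox := setsrid(box3d(NEW.geom)::geometry, 4326);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER adminbndy1_bbox_trig
    BEFORE INSERT OR UPDATE ON adminbndy1
    FOR EACH ROW EXECUTE PROCEDURE adminbndy1_sync_bbox();
```

With the trigger in place, the one-off UPDATE above is only needed once
for the pre-existing rows.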

logigis=# \o /dev/null
logigis=# \timing
Timing is on.
logigis=# SELECT geom FROM adminbndy1 WHERE geom &&
setsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498
47.40634259259259)'::box3d,4326);
Time: 11224.185 ms
logigis=# SELECT geom FROM adminbndy1 WHERE bbox &&
setsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498
47.40634259259259)'::box3d,4326);
Time: 7689.720 ms

So you can see that, when actually detoasting the 5 rows and
deserializing the geometries to WKT format (their canonical text
representation), the time ratio improves, but there is still a
noticeable difference.

Markus


Re: [PERFORM] Bad query optimizer misestimation because of TOAST

2005-02-02 Thread Tom Lane
Markus Schaber [EMAIL PROTECTED] writes:
 Tom Lane schrieb:
 I don't buy this analysis at all.  The toasted columns are not those in
 the index (because we don't support out-of-line-toasted index entries),
 so a WHERE clause that only touches indexed columns isn't going to need
 to fetch anything from the toast table.

 The column is a PostGIS column, and the index was created using GIST.
 Those are lossy indices that do not store the whole geometry, but only
 the bounding box corners of the geometry (2 points).
 Without using the index, the && operator (which tests for bbox
 overlapping) has to load the whole geometry from disk, and extract the
 bbox therein (as it cannot make use of partial fetch).

Ah, I see; I forgot to consider the GIST storage option, which allows
the index contents to be something different from the represented column.
Hmm ...

What I would be inclined to do is to extend ANALYZE to make an estimate
of the extent of toasting of every toastable column, and then modify
cost_qual_eval to charge a nonzero cost for evaluation of Vars that are
potentially toasted.

This implies an initdb-forcing change in pg_statistic, which might or
might not be allowed for 8.1 ... we are still a bit up in the air on
what our release policy will be for 8.1.

My first thought about what stat ANALYZE ought to collect is average
number of out-of-line TOAST chunks per value.  Armed with that number
and size information about the TOAST table, it'd be relatively simple
for costsize.c to estimate the average cost of fetching such values.
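The charge Tom describes might look roughly like the following sketch (not
PostgreSQL source; the constant values and function names are illustrative
assumptions, loosely following the planner's page-cost model):

```python
# Sketch of the proposed cost estimate: charge, per evaluation of a
# potentially toasted Var, the cost of fetching its average number of
# out-of-line TOAST chunks.  Constants are illustrative only.

RANDOM_PAGE_COST = 4.0     # planner cost of one nonsequential page fetch
TOAST_CHUNKS_PER_PAGE = 4  # roughly four ~2 KB chunks fit in an 8 KB page

def toast_fetch_cost(avg_chunks_per_value: float) -> float:
    """Estimated extra cost of detoasting one value of a column,
    given the per-value chunk count ANALYZE would collect."""
    pages = avg_chunks_per_value / TOAST_CHUNKS_PER_PAGE
    return pages * RANDOM_PAGE_COST

def seqscan_qual_cost(rows: int, avg_chunks_per_value: float) -> float:
    """Total qual-evaluation charge for a seqscan that must detoast
    the column once per row."""
    return rows * toast_fetch_cost(avg_chunks_per_value)

# 83 rows averaging ~1.3 MB each is on the order of 700 chunks per value,
# so the seqscan's qual cost dwarfs the 3-page heap scan itself.
print(seqscan_qual_cost(83, 692))
```

Under such a model the planner would no longer see the 3-page table as a
trivially cheap seqscan when the qual touches the toasted column.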

I'm not sure if it's worth trying to model the cost of decompression of
compressed values.  Surely that's a lot cheaper than fetching
out-of-line values, so maybe we can just ignore it.  If we did want to
model it then we'd also need to make ANALYZE note the fraction of values
that require decompression, and maybe something about their sizes.

This approach would overcharge for operations that are able to work with
partially fetched values, but it's probably not reasonable to expect the
planner to account for that with any accuracy.

Given this we'd have a pretty accurate computation of the true cost of
the seqscan alternative, but what of indexscans?  The current
implementation charges one evaluation of the index qual(s) per
indexscan, which is not really right because actually the index
component is never evaluated at all.  This didn't matter when the index
component was a Var with zero eval cost, but if we're charging some eval
cost it might.  But ... since it's charging only one eval per scan
... the error is probably down in the noise in practice, and it may not
be worth trying to get it exactly right.

A bigger concern is what about lossy indexes?  We currently ignore the
costs of rechecking qual expressions for fetched rows, but this might be
too inaccurate for situations like yours.  I'm hesitant to mess with it
though.  For one thing, to get it right we'd need to understand how many
rows will be returned by the raw index search (which is the number of
times we'd need to recheck).  At the moment the only info we have is the
number that will pass the recheck, which could be a lot less ... and of
course, even that is probably a really crude estimate when we are
dealing with this sort of operator.

Seems like a bit of a can of worms ...

regards, tom lane
