"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> I tested TOAST using a method similar to the above method against CVS
> HEAD, with default shared_buffers = 32MB and no assert()s. I created
> backends with power-of-2 settings for TOAST_TUPLES_PER_PAGE (4(default),
> 8, 16, 32, 64) which gives TOAST/non-TOAST breakpoints of 2k(default),
> 1k, 512, 256, and 128, roughly.
> The results are here:
> Strangely, 128 bytes seems to be the break-even point for TOAST and
> non-TOAST, even for sequential scans of the entire heap touching all
> long row values. I am somewhat confused why TOAST has faster access
> than inline heap data.
Did your test also imply setting TOAST_MAX_CHUNK_SIZE? And what size long
values were you actually storing? How did you generate them?
I wonder if what's happening is that you have large values which, when stored
inline, leave lots of dead space in the table. I.e., if you're generating
values with size near 2k at the default chunk size, you would expect to find
an average of 1k of dead space per page, or a 12.5% drain on performance. As
you lower the chunk size you decrease that margin.
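The arithmetic above can be sketched with a rough packing model. This is an
assumption-laden simplification, not PostgreSQL's exact page accounting: the
page header and per-tuple overhead constants are approximate, and values are
assumed uniformly distributed "near 2k" (here 1.5 kB to 2.5 kB):

```python
# Rough model: leftover space on an 8 kB heap page when rows of a given
# size are packed inline.  PAGE_HEADER and TUPLE_OVERHEAD are approximate.
PAGE_SIZE = 8192
PAGE_HEADER = 24
TUPLE_OVERHEAD = 28   # assumed per-row header + line-pointer cost

def dead_space(value_size):
    """Bytes left unusable on one page for fixed-size rows of value_size."""
    usable = PAGE_SIZE - PAGE_HEADER
    per_tuple = value_size + TUPLE_OVERHEAD
    fit = usable // per_tuple          # whole tuples that fit on the page
    return usable - fit * per_tuple    # remainder is dead space

# Average the leftover over value sizes "near 2k" (1.5 kB .. 2.5 kB)
sizes = range(1536, 2560)
avg = sum(dead_space(s) for s in sizes) / len(sizes)
print(f"average dead space: {avg:.0f} bytes "
      f"= {100 * avg / PAGE_SIZE:.1f}% of the page")
```

Under these assumptions the average comes out close to 1 kB per page, i.e.
roughly the 12.5% figure above.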
However I agree that it's hard to believe that the costs of random access
wouldn't swamp that 12.5% overhead pretty quickly.
One query I used when measuring the impact of the variable varlena stuff was
this, which gives the distribution of tuples/page over a table (replace foo
with your table name):

SELECT n, count(*)
  FROM (SELECT count(*) AS n
          FROM foo
         GROUP BY (point_in(tidout(ctid)))[0]
       ) AS x
 GROUP BY n;
That might help you peek at what's going on. You could also combine it with
pg_column_size(foo.*) to measure the size of each tuple. I think that will
measure the size of the tuple as-is, before the columns are detoasted.