On Mon, 2008-06-16 at 15:35 -0400, Tom Lane wrote:
> Recent discussions with the PostGIS hackers led me to think about ways
> to reduce overhead when the same TOAST value is repeatedly detoasted.
> In the example shown here
> http://archives.postgresql.org/pgsql-hackers/2008-06/msg00384.php
> 90% of the runtime is being consumed by repeated detoastings of a ...
Simon Riggs [EMAIL PROTECTED] writes:
> Agreed. Yet I'm thinking that a more coherent approach to optimising the
> tuple memory usage in the executor tree might be better than the special
> cases we seem to have in various places. I don't know what that is, or
> even if it's possible though.

Yeah. I ...
On Jun 16, 2008, at 3:35 PM, Tom Lane wrote:
> ... to a cache entry rather than a freshly palloc'd value. The cache lookup
> key is the toast table OID plus value OID. Now pg_detoast_datum() has no
> ...
> the result of decompressing an inline-compressed datum, because those have
> no unique ID ...
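The lookup scheme quoted above (cache key = toast table OID plus value OID) can be sketched in plain C. This is an illustrative standalone sketch, not PostgreSQL source: the names (`toast_cache_lookup`, `toast_cache_store`), the bucket count, and the chained-bucket layout are all invented for the example, and a real implementation would hook into `pg_detoast_datum()` and use the backend's own hash machinery.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint32_t Oid;            /* PostgreSQL OIDs are 32-bit */

typedef struct CacheEntry {
    Oid toastrelid;              /* OID of the toast table */
    Oid valueid;                 /* OID of the stored value within it */
    void *data;                  /* the detoasted bytes */
    size_t len;
    struct CacheEntry *next;     /* chain within one hash bucket */
} CacheEntry;

#define NBUCKETS 64
static CacheEntry *buckets[NBUCKETS];

static unsigned hash_key(Oid toastrelid, Oid valueid)
{
    return ((toastrelid * 2654435761u) ^ valueid) % NBUCKETS;
}

/* Return the cached detoasted value, or NULL on a cache miss. */
void *toast_cache_lookup(Oid toastrelid, Oid valueid, size_t *len)
{
    CacheEntry *e;
    for (e = buckets[hash_key(toastrelid, valueid)]; e != NULL; e = e->next)
    {
        if (e->toastrelid == toastrelid && e->valueid == valueid)
        {
            if (len)
                *len = e->len;
            return e->data;
        }
    }
    return NULL;
}

/* After a real detoast, remember the result for later lookups. */
void toast_cache_store(Oid toastrelid, Oid valueid, const void *data, size_t len)
{
    unsigned h = hash_key(toastrelid, valueid);
    CacheEntry *e = malloc(sizeof(CacheEntry));
    e->toastrelid = toastrelid;
    e->valueid = valueid;
    e->data = malloc(len);
    memcpy(e->data, data, len);
    e->len = len;
    e->next = buckets[h];
    buckets[h] = e;
}
```

On a miss the caller performs the actual detoast and then stores the result; on a hit it returns the cached pointer, skipping the toast-table fetch and decompression entirely, which is the saving the thread is after.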
Jeff [EMAIL PROTECTED] writes:
> On Jun 16, 2008, at 3:35 PM, Tom Lane wrote:
>> the result of decompressing an inline-compressed datum, because those
>> have no unique ID that could be used for a lookup key. This puts a
>> bit of a ...
> Wouldn't the tid fit this? or table oid + tid?

No. The killer ...
> But we can resolve that by ruling that the required lifetime is the same
> as the value would have had if it'd really been palloc'd --- IOW, until
> the memory context that was current at the time gets deleted or reset.

Many support functions of GiST/GIN live in very short-lived memory contexts - only ...
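The lifetime rule quoted above (a cached value must stay valid exactly as long as a palloc'd copy would, i.e. until the memory context that was current at detoast time is reset or deleted) can be modeled standalone with reference counts: each context pins the entries handed out while it was current, and resetting the context drops those pins. This is an invented analogue for illustration only, not PostgreSQL's actual MemoryContext API; all names are hypothetical.

```c
#include <stdlib.h>

typedef struct CachedValue {
    int refcount;                /* pins from live memory contexts */
    int freed;                   /* set once the last pin is dropped */
} CachedValue;

typedef struct Pin {
    CachedValue *value;
    struct Pin *next;
} Pin;

typedef struct MemoryContext {
    Pin *pins;                   /* values handed out while this context was current */
} MemoryContext;

/* Hand out a cached value under the given context, pinning it there. */
void cache_pin(MemoryContext *cxt, CachedValue *v)
{
    Pin *p = malloc(sizeof(Pin));
    p->value = v;
    p->next = cxt->pins;
    cxt->pins = p;
    v->refcount++;
}

/* Context reset/delete: drop all pins taken under this context.  A value
 * whose last pin goes away becomes eligible for eviction (marked 'freed'
 * here); a value still pinned elsewhere must survive, which is why a
 * short-lived GiST/GIN context resetting does not invalidate the entry
 * for longer-lived callers. */
void context_reset(MemoryContext *cxt)
{
    while (cxt->pins)
    {
        Pin *p = cxt->pins;
        cxt->pins = p->next;
        if (--p->value->refcount == 0)
            p->value->freed = 1;
        free(p);
    }
}
```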
Teodor Sigaev [EMAIL PROTECTED] writes:
>> But we can resolve that by ruling that the required lifetime is the same
>> as the value would have had if it'd really been palloc'd --- IOW, until
>> the memory context that was current at the time gets deleted or reset.
> Many support functions of GiST/GIN ...
> I definitely think it's worth it, even if it doesn't handle an
> inline-compressed datum.

Yeah. I'm not certain how much benefit we could get there anyway.
If the datum isn't out-of-line then there's a small upper limit on how
big it can be and hence a small upper limit on how long it takes ...
Recent discussions with the PostGIS hackers led me to think about ways
to reduce overhead when the same TOAST value is repeatedly detoasted.
In the example shown here
http://archives.postgresql.org/pgsql-hackers/2008-06/msg00384.php
90% of the runtime is being consumed by repeated detoastings of a ...
* Tom Lane ([EMAIL PROTECTED]) wrote:
> One unsolved problem is that this scheme doesn't provide any way to cache
> the result of decompressing an inline-compressed datum, because those have
> no unique ID that could be used for a lookup key.

That's pretty unfortunate.

> Ideas?

Not at the moment, ...
Stephen Frost [EMAIL PROTECTED] writes:
> * Tom Lane ([EMAIL PROTECTED]) wrote:
>> Comments, better ideas? Anyone think this is too much trouble to take
>> for the problem?
> I definitely think it's worth it, even if it doesn't handle an
> inline-compressed datum.

Yeah. I'm not certain how much ...