On 10/23/07, Hannu Krosing <[EMAIL PROTECTED]> wrote:
> On a fine day, Tue, 2007-10-23 at 13:04, Heikki Linnakangas wrote:
> > Gokulakannan Somasundaram wrote:
> > > Say, with a normal index, you need to go to the table for checking
> > > the snapshot. So you would be loading both the index pages and the
> > > table pages in order to satisfy a certain operation. Whereas with a
> > > thick index you occupy 16 bytes more per tuple in order to avoid
> > > going to the table. So memory management is again better. But I can
> > > run the load test, if that's required.
> >
> > Yes, performance testing is required for any performance-related patch.
> >
> > Remember that you're competing against DSM. We're going to want some
> > kind of a DSM anyway because it allows skipping unmodified parts of the
> > heap in vacuum.
> I would suggest that you use just an additional heap with decoupled
> visibility fields as DSM.
> For a large number of usage scenarios this will be highly compressible
> and will mostly stay in processor caches.
> You can start slow, and have the info duplicated in both main heap and
> visibility heap (aka DSM).
> There are several advantages to keeping a separate visibility heap:
> 1) it is usually highly compressible; at least you can throw away
> cmin/cmax quite soon, and usually also FREEZE and RLE-encode the rest.
> 2) faster access, more tightly packed data pages.
> 3) index-only scans
> 4) superfast VACUUM FREEZE
> 5) makes VACUUM faster even for worst cases (interleaving live and dead
> tuples)
> 6) any index scan will be faster due to fetching only visible rows from
> main heap.

If you have to store the visibility fields of all the tuples of each table,
they may not fit in the cache. Say a table has 1 million rows: we would need
22 MB of visibility space, since the visibility info takes 16 bytes per tuple
and we would need to link each entry back to the heap with a tuple-id (6
bytes); linking it to indexes would need one more id, which I am not counting
here. With 10 such tables, that comes to 220 MB. Keeping all of that pinned
in memory may not be advisable in some circumstances, and if it is not in
memory, then it is no different from referring to the table itself. But I
accept that it is a concept worth trying out. I think the advantage of thick
indexes is that they are optional; if this can also be made optional, that
would be better.
But if we are going to suggest it as a replacement for DSM, then it loses
the advantage of being small.

> >  Even when all the tuples are in memory, index only scans are
> > > almost 40-60% faster than the index scans with thin indexes.
> >
> > Have you actually benchmarked that? What kind of query was that? I don't
> > believe for a second that fetching the heap tuple when the page is in
> > memory accounts for 40-60% of the overhead of regular index scans.
> It depends heavily on the type of memory (postgresql page or disk cache)
> it is in.
> I remember doing Slony subscribes in the early days, and the speed
> difference when loading a table with an active PK index was several
> times, depending on the shared_buffers setting.
> That was for a table, where both heap and index did fit in the 2G memory
> which was available, the difference being only shuffling the pages
> between postgresql buffer and linux system cache or not.
> > BTW, another issue you'll have to tackle, that a DSM-based patch will
> > have to solve as well, is how to return tuples from an index. In b-tree,
> > we scan pages page at a time, keeping a list of all tids that match the
> > scanquals in BTScanOpaque. If we need to return not only the tids of the
> > matching tuples, but the tuples as well, where do we store them? You
> > could make a palloc'd copy of them all, but that seems quite expensive.
> Have you considered returning them as "already visibility-checked pages",
> similar to what views or set-returning functions return?
> -------------------
> Hannu

CertoSQL Project,
Allied Solution Groups.
