On Jun 3, 2011 8:38 PM, Bruce Momjian br...@momjian.us wrote:
I realize we just read the pages from the kernel to maintain sequential
I/O, but do we actually read the contents of the page if we know it
doesn't need vacuuming? If so, do we need to?
I don't follow. What's your question?
Tom's
On 03.06.2011 22:16, Bruce Momjian wrote:
I realize we just read the pages from the kernel to maintain sequential
I/O, but do we actually read the contents of the page if we know it
doesn't need vacuuming?
Yes.
If so, do we need to?
Not necessarily, but it allows us to freeze old tuples,
Heikki Linnakangas wrote:
On 27.05.2011 16:52, Pavan Deolasee wrote:
On closer inspection, I realized that we have
deliberately put in this hook to ensure that we use visibility maps
only when we see at least SKIP_PAGES_THRESHOLD worth of all-visible
sequential pages to take advantage of
On Fri, May 27, 2011 at 8:40 PM, Greg Stark gsst...@mit.edu wrote:
Separately it's a bit strange that we actually have to visit the
pages. We have all the information we need in the VM to determine
whether there's a run of 32 vacuum-clean pages. Why can't we look at
the next 32 pages and if
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and doing some
tests, I realized that a manual VACUUM would always scan first 31
pages of a relation which has not received
Pavan Deolasee pavan.deola...@gmail.com writes:
My statistical skills are limited, but wouldn't that mean that for a
fairly well distributed write activity across a large table, if there
are even 3-4% update/deletes, we would most likely hit a
not-all-visible page for every 32 pages scanned ?
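Pavan's back-of-envelope claim can be checked directly: if a fraction p of heap pages are not all-visible and those pages are spread uniformly (an idealized assumption, mine, not from the thread), the chance that any run of 32 consecutive pages is entirely all-visible is (1 - p)^32. A quick sketch:

```python
# Probability that a run of SKIP_PAGES_THRESHOLD consecutive pages is
# entirely all-visible, assuming not-all-visible pages occur
# independently with density p. Idealized model, not PostgreSQL code.
SKIP_PAGES_THRESHOLD = 32

def p_skippable_run(p, run=SKIP_PAGES_THRESHOLD):
    return (1.0 - p) ** run

for p in (0.01, 0.03, 0.04):
    print(f"dirty fraction {p:.0%}: "
          f"P(32-page all-visible run) = {p_skippable_run(p):.2f}")
```

With even 3% of pages modified, only about 38% of 32-page windows are fully skippable under this model, which supports the concern that modest, well-distributed write activity largely defeats the skipping optimization.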
2011/5/27 Pavan Deolasee pavan.deola...@gmail.com:
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and doing some
tests, I realized that a manual VACUUM would always
On 27.05.2011 16:52, Pavan Deolasee wrote:
On closer inspection, I realized that we have
deliberately put in this hook to ensure that we use visibility maps
only when we see at least SKIP_PAGES_THRESHOLD worth of all-visible
sequential pages to take advantage of possible OS seq scan
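The behavior under discussion (the skip logic in lazy_scan_heap in vacuumlazy.c) amounts to: skip a page only when it begins a run of at least SKIP_PAGES_THRESHOLD all-visible pages, so that skipping does not defeat the OS's sequential readahead. A rough illustrative model in Python, not the actual C code (the visibility map here is just a list of booleans):

```python
SKIP_PAGES_THRESHOLD = 32  # the magic number questioned in the thread

def pages_to_scan(vm):
    """Model of VACUUM's page skipping: a page is skipped only when it
    starts a run of at least SKIP_PAGES_THRESHOLD consecutive
    all-visible pages; shorter runs are read anyway to preserve
    sequential I/O."""
    scanned = []
    n = len(vm)
    blkno = 0
    while blkno < n:
        if vm[blkno]:
            # measure the run of consecutive all-visible pages
            run = 0
            while blkno + run < n and vm[blkno + run]:
                run += 1
            if run >= SKIP_PAGES_THRESHOLD:
                blkno += run  # skip the whole run
                continue
        scanned.append(blkno)
        blkno += 1
    return scanned
```

For example, a table whose visibility map shows 31 all-visible pages followed by one modified page is scanned in full, while a clean 32-page run is skipped entirely.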
2011/5/27 Cédric Villemain cedric.villemain.deb...@gmail.com:
2011/5/27 Pavan Deolasee pavan.deola...@gmail.com:
I wonder if we have tested the reasoning behind having
SKIP_PAGES_THRESHOLD and the magic number of 32 assigned to it
currently. While looking at the code after a long time and
On Fri, May 27, 2011 at 7:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Pavan Deolasee pavan.deola...@gmail.com writes:
My statistical skills are limited, but wouldn't that mean that for a
fairly well distributed write activity across a large table, if there
are even 3-4% update/deletes, we would
On Fri, May 27, 2011 at 7:11 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Well, as with normal queries, it's usually faster to just seqscan the whole
table if you need to access more than a few percent of the pages, because
sequential I/O is so much faster than random I/O.
On Fri, May 27, 2011 at 7:41 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 27.05.2011 16:52, Pavan Deolasee wrote:
On closer inspection, I realized that we have
deliberately put in this hook to ensure that we use visibility maps
only when we see at least
On Fri, May 27, 2011 at 11:10 AM, Greg Stark gsst...@mit.edu wrote:
It would be nice if the VM had a bit for all-frozen but that
wouldn't help much except in the case of truly cold data. We could
perhaps keep the frozen data per segment or per VM page (which covers
a large section of the
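Greg's suggestion of an all-frozen indicator alongside the all-visible bit could look roughly like the sketch below: two bits per heap page, where an ordinary vacuum may skip any all-visible page but an aggressive (anti-wraparound) vacuum must still visit pages that are not yet all-frozen. This is purely illustrative; the names and layout are mine, not an existing PostgreSQL API.

```python
# Hypothetical visibility map with two flag bits per heap page:
# all-visible (cleanup can be skipped) and all-frozen (freezing can
# be skipped too). Illustrative sketch only.
ALL_VISIBLE = 0x01
ALL_FROZEN = 0x02

class VisibilityMap:
    def __init__(self, npages):
        self.bits = bytearray(npages)  # one flag byte per heap page

    def set_flags(self, blkno, flags):
        self.bits[blkno] |= flags

    def clear(self, blkno):
        # any modification to the heap page clears both bits
        self.bits[blkno] = 0

    def can_skip(self, blkno, aggressive):
        # an aggressive vacuum must still visit all-visible pages
        # whose tuples have not all been frozen yet
        need = (ALL_VISIBLE | ALL_FROZEN) if aggressive else ALL_VISIBLE
        return (self.bits[blkno] & need) == need
```

As the thread notes, this mainly pays off for truly cold data, where whole regions become all-frozen and even anti-wraparound vacuums can pass over them.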