On Wed, Jan 24, 2007 at 07:30:05PM -0500, Tom Lane wrote:
> Kenneth Marshall <[EMAIL PROTECTED]> writes:
> > Not that I am aware of. Even extending the relation by one additional
> > block can make a big difference in performance
>
> Do you have any evidence to back up that assertion?
>
> It seem
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Jim C. Nasby wrote:
>> If we extended relations by more than one page at a time we'd probably
>> have a better shot at the blocks on disk being contiguous and all read
>> at the same time by the OS.
> Actually, there is evidence that adding only a single
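The multi-block extension idea can be sketched at the filesystem level (Python for illustration; the 8 KB page size matches PostgreSQL's default, but the function name is hypothetical — the real code path is C, inside the storage manager):

```python
import os
import tempfile

BLOCK = 8192  # PostgreSQL's default page size

def extend_by_blocks(path, nblocks):
    # Append nblocks zero-filled pages in a single write, giving the
    # filesystem a chance to allocate them contiguously, instead of
    # growing the file one page at a time.
    with open(path, "ab") as f:
        f.write(b"\0" * (BLOCK * nblocks))

fd, path = tempfile.mkstemp()
os.close(fd)
extend_by_blocks(path, 16)      # grow the "relation" by 16 pages at once
size = os.path.getsize(path)
os.remove(path)
```

Whether the blocks actually land contiguously depends on the filesystem's allocator, which is exactly the evidence being asked for above.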
Kenneth Marshall <[EMAIL PROTECTED]> writes:
> Not that I am aware of. Even extending the relation by one additional
> block can make a big difference in performance
Do you have any evidence to back up that assertion?
It seems a bit nontrivial to me --- not the extension part exactly, but
making
On Jan 22, 2007, at 6:53 PM, Kenneth Marshall wrote:
The default should
be approximately the OS standard read-ahead amount.
Is there anything resembling a standard across the OSes we support?
Better yet, is there a standard call that allows you to find out what
the read-ahead setting is?
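There is no standard cross-platform call. On Linux the per-device setting is exposed in sysfs as `read_ahead_kb` (a Linux-only assumption; the device path and helper name below are illustrative):

```python
import os

def readahead_bytes(sysfs_text):
    # sysfs reports the read-ahead window in kilobytes, e.g. "128\n"
    return int(sysfs_text.strip()) * 1024

# Linux-only: the other OSes we support expose no common equivalent,
# which is precisely the portability problem raised above.
path = "/sys/block/sda/queue/read_ahead_kb"
if os.path.exists(path):
    with open(path) as f:
        print("sda read-ahead:", readahead_bytes(f.read()), "bytes")
```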
On Mon, Jan 22, 2007 at 05:51:53PM +0000, Gregory Stark wrote:
> Actually no. A while back I did experiments to see how fast reading a file
> sequentially was compared to reading the same file sequentially but skipping
> x% of the blocks randomly. The results were surprising (to me) and depressing.
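The access pattern of that experiment can be reconstructed roughly as follows (a hypothetical Python sketch, not Stark's actual harness; on a small cached temp file the timings are meaningless, so a real test needs a file larger than RAM or dropped caches):

```python
import os
import random
import tempfile

BLOCK = 8192  # read in page-sized chunks

def read_skipping(path, skip_fraction, seed=0):
    # Read the file block by block, randomly skipping skip_fraction of
    # the blocks. Each skip turns into a seek, which defeats the OS
    # read-ahead; returns the number of blocks actually read.
    rng = random.Random(seed)
    nblocks = os.path.getsize(path) // BLOCK
    nread = 0
    with open(path, "rb") as f:
        for i in range(nblocks):
            if rng.random() < skip_fraction:
                continue
            f.seek(i * BLOCK)
            f.read(BLOCK)
            nread += 1
    return nread

fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * (BLOCK * 1000))
os.close(fd)
results = {s: read_skipping(path, s) for s in (0.0, 0.07, 1.0)}
os.remove(path)
```

The 7% breakeven reported downthread means the seeks cost so much that skipping fewer than ~7% of the blocks saves no time over just reading everything.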
On Jan 22, 2007, at 11:16 AM, Richard Huxton wrote:
Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
On Mon, Jan 22, 2007 at 07:24:20PM +0000, Heikki Linnakangas wrote:
> Kenneth Marshall wrote:
> >On Mon, Jan 22, 2007 at 06:42:09PM +0000, Simon Riggs wrote:
> >>Hold that thought! Read Heikki's Piggyback VACUUM idea on new thread...
> >
> >There may be other functions that could leverage a similar
On Mon, Jan 22, 2007 at 06:42:09PM +0000, Simon Riggs wrote:
> On Mon, 2007-01-22 at 13:27 -0500, Bruce Momjian wrote:
> > Yep, agreed on the random I/O issue. The larger question is if you have
> > a huge table, do you care to reclaim 3% of the table size, rather than
> > just vacuum it when it g
O'Connor; Pavan
>Deolasee; Christopher Browne; pgsql-general@postgresql.org;
>pgsql-hackers@postgresql.org
>Subject: Re: [HACKERS] [GENERAL] Autovacuum Improvements
>
>"Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
>> Yep, agreed on the random I/O issue. The large
Gregory Stark wrote:
>
> Actually no. A while back I did experiments to see how fast reading a file
> sequentially was compared to reading the same file sequentially but skipping
> x% of the blocks randomly. The results were surprising (to me) and depressing.
> The breakeven point was about 7%. [.
Kenneth Marshall wrote:
On Mon, Jan 22, 2007 at 06:42:09PM +0000, Simon Riggs wrote:
Hold that thought! Read Heikki's Piggyback VACUUM idea on new thread...
There may be other functions that could leverage a similar sort of
infrastructure. For example, a long DB mining query could be registere
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but vacuuming to reclaim 3% three times seems like
it is go
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> I agree that index cleanup isn't > 50% of vacuum. I was trying to figure
> out how small, and it seems about 15% of the total table, which means if
> we have bitmap vacuum, we can conceivably reduce vacuum load by perhaps
> 80%, assuming 5% of the table
On Mon, 2007-01-22 at 12:18 -0500, Bruce Momjian wrote:
> Heikki Linnakangas wrote:
> >
> > In any case, for the statement "Index cleanup is the most expensive part
> > of vacuum" to be true, your indexes would have to take up 2x as much
> > space as the heap, since the heap is scanned twice.
Russell Smith wrote:
2. Index cleanup is the most expensive part of vacuum. So doing a
partial vacuum actually means more I/O as you have to do index cleanup
more often.
I don't think that's usually the case. Index(es) are typically only a
fraction of the size of the table, and since 8.2 we
On Sun, 2007-01-21 at 14:26 -0600, Jim C. Nasby wrote:
> On Sun, Jan 21, 2007 at 11:39:45AM +0000, Heikki Linnakangas wrote:
> > Russell Smith wrote:
> > >Strange idea that I haven't researched, Given Vacuum can't be run in a
> > >transaction, it is possible at a certain point to quit the current
On Sun, Jan 21, 2007 at 12:24:38PM +, Simon Riggs wrote:
> Partial vacuum would still be possible if you remembered where you got
> to in the VACUUM and then started from that same point next time. It
> could then go to the end of the table and wrap back around.
ISTM the Dead Space Map would g
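The "remember where you got to and wrap back around" suggestion could look roughly like this (a sketch; the class name and chunk-based API are invented for illustration, and the saved position would really have to be persisted, e.g. alongside the DSM):

```python
class PartialScanCursor:
    """Hand out fixed-size block ranges of a relation, resuming where
    the previous (partial) VACUUM stopped and wrapping past the end."""

    def __init__(self, nblocks, chunk):
        self.nblocks = nblocks
        self.chunk = chunk
        self.next_block = 0   # would be persisted between VACUUM runs

    def next_range(self):
        start = self.next_block
        end = min(start + self.chunk, self.nblocks)
        self.next_block = end % self.nblocks  # wrap back to block 0
        return range(start, end)

cur = PartialScanCursor(nblocks=10, chunk=4)
ranges = [list(cur.next_range()) for _ in range(4)]
# covers blocks 0-3, then 4-7, then the short tail 8-9, then wraps to 0-3
```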
On Sat, 2007-01-20 at 09:41 +1100, Russell Smith wrote:
> Darcy Buskermolen wrote:
> > [snip]
> >
> > Another thought, is it at all possible to do a partial vacuum? ie spend
> > the
> > next 30 minutes vacuuming foo table, and update the fsm with what we have
> > learned over the 30 mins, e
Russell Smith wrote:
Strange idea that I haven't researched: given VACUUM can't be run in a
transaction, it is possible at a certain point to quit the current
transaction and start another one. There has been much chat and now a
TODO item about allowing multiple vacuums to not starve small ta
Darcy Buskermolen wrote:
[snip]
Another thought, is it at all possible to do a partial vacuum? ie spend the
next 30 minutes vacuuming foo table, and update the fsm with what we have
learned over the 30 mins, even if we have not done a full table scan ?
There was a proposal for this, but
Added to TODO:
> o Allow multiple vacuums so large tables do not starve small
> tables
>
> http://archives.postgresql.org/pgsql-general/2007-01/msg00031.php
>
> o Improve control of auto-vacuum
>
> http://archives.postgresql.org/pgsql-hackers/2006-12/msg00876.p
On Friday 19 January 2007 01:47, Simon Riggs wrote:
> On Tue, 2007-01-16 at 07:16 -0800, Darcy Buskermolen wrote:
> > On Tuesday 16 January 2007 06:29, Alvaro Herrera wrote:
> > > elein wrote:
> > > > Have you made any consideration of providing feedback on autovacuum
> > > > to users? Right now we
Matthew T. O'Connor wrote:
> Alvaro Herrera wrote:
>> I'd like to hear other people's opinions on Darcy Buskermolen proposal
>> to have a log table, on which we'd register what did we run, at what
>> time, how long did it last, [...]
>
> I think most people would just be happy if we could get aut
Alvaro Herrera wrote:
I'd like to hear other people's opinions on Darcy Buskermolen proposal
to have a log table, on which we'd register what did we run, at what
time, how long did it last, how many tuples did it clean, etc. I feel
having it on the regular text log is useful but it's not good en
elein wrote:
> Have you made any consideration of providing feedback on autovacuum to users?
> Right now we don't even know what tables were vacuumed when and what was
> reaped. This might actually be another topic.
I'd like to hear other people's opinions on Darcy Buskermolen proposal
to have a
Simon Riggs wrote:
> Perhaps we should focus on the issues that might result, so that we
> address those before we spend time on the details of the user interface.
> Can we deadlock or hang from running multiple autovacuums?
If you were to run multiple autovacuum processes the way they are today,
Simon Riggs wrote:
On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
Christopher Browne wrote:
Seems to me that you could get ~80% of the way by having the simplest
"2 queue" implementation, where tables with size < some threshold get
thrown at the "little table" queue, and tables above
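The simplest "2 queue" scheme is easy to sketch (the class name and block threshold are hypothetical; the real scheduling would live in the autovacuum launcher/worker code):

```python
from collections import deque

SMALL_TABLE_BLOCKS = 1000   # hypothetical size threshold, in blocks

class TwoQueueScheduler:
    """Route tables into a 'little table' or 'big table' vacuum queue
    by size, so quick vacuums are not starved by huge ones."""

    def __init__(self, threshold=SMALL_TABLE_BLOCKS):
        self.threshold = threshold
        self.little = deque()   # served by one worker
        self.big = deque()      # served by another

    def enqueue(self, table, nblocks):
        queue = self.little if nblocks < self.threshold else self.big
        queue.append(table)

sched = TwoQueueScheduler()
sched.enqueue("pg_stat_log", 40)
sched.enqueue("orders_history", 2_500_000)
```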
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Alvaro
Herrera) transmitted:
> Simon Riggs wrote:
>
>> Some feedback from initial testing is that 2 queues probably isn't
>> enough. If you have tables with 100s of blocks and tables with
>> millions of blocks, the tables in
On Fri, 2007-01-12 at 19:33 -0300, Alvaro Herrera wrote:
> > Alvaro, have you completed your design?
>
> No, I haven't, and the part that's missing is precisely the queues
> stuff. I think I've been delaying posting it for too long, and that is
> harmful because it makes other people waste time
Simon Riggs wrote:
> Some feedback from initial testing is that 2 queues probably isn't
> enough. If you have tables with 100s of blocks and tables with millions
> of blocks, the tables in the mid-range still lose out. So I'm thinking
> that a design with 3 queues based upon size ranges, plus the
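Generalizing from 2 to 3 (or N) queues based on size ranges is a one-liner with bisect (the boundaries below are invented for illustration):

```python
import bisect

# Hypothetical size-range boundaries, in blocks: tables under 1,000
# blocks go to queue 0, under 1,000,000 to queue 1, the rest to queue 2.
BOUNDARIES = [1_000, 1_000_000]

def queue_for(nblocks):
    # bisect_right finds which size range the table falls into,
    # so mid-range tables get their own queue and no longer lose out.
    return bisect.bisect_right(BOUNDARIES, nblocks)
```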