On Oct 1, 2009, at 4:18 PM, Robert Haas wrote:
On Thu, Oct 1, 2009 at 5:08 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
The elephant in the room here is that if the relation is a million
pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
is going to help us truncate the relation unless every tuple on page
1,000,000 gets updated or deleted.
On Thu, 2009-10-01 at 17:08 -0400, Tom Lane wrote:
The discussion at the moment is about ways
of reducing the probability of getting into that situation in the
first place.
Definitely.
That doesn't preclude also providing some more-invasive tools that
people can use when they do get into that situation.
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
[pages with free space or total pages in relation?]
It's going to be the latter --- we do not know, and are *not* going
to invest the cycles to find out, how many pages have a useful
amount of free space.
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
(Hm, so we might want to make the probability depend on
max_connections?)
Without doing rigorous math on it, I'd guess that to prevent
contention among n connections you'd want the probability of resetting
the sweep to be about 1 / (n * 2).
On Sep 30, 2009, at 5:13 PM, Kevin Grittner wrote:
decibel deci...@decibel.org wrote:
*any* step that improves dealing with table bloat is extremely
welcome, as right now you're basically stuck rebuilding the table.
+1
Although, possibly more irritating than actually rebuilding it is
evaluating borderline bloat situations to determine if
decibel wrote:
So while something that makes it easier to clean out the end of a
table would be good, I think the critical need is a way to make
vacuum more aggressive about obtaining the exclusive lock.
I wonder if we should have a different mode of operation that only
attempted the truncate (say VACUUM TRUNCATE), optionally being
non-conditional about obtaining the required lock.
Alvaro Herrera alvhe...@commandprompt.com writes:
I wonder if we should have a different mode of operation that only
attempted the truncate (say VACUUM TRUNCATE), optionally being
non-conditional about obtaining the required lock. That said, I wonder
even more whether any such hacks are still
On Thu, 2009-10-01 at 12:05 -0400, Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
I wonder if we should have a different mode of operation that only
attempted the truncate (say VACUUM TRUNCATE), optionally being
non-conditional about obtaining the required lock. That
Simon Riggs si...@2ndquadrant.com writes:
No real need to be random is there? In the bloated space scenario,
VACUUM will be triggered but will be unable to remove the empty blocks.
So in that case VACUUM can hint the FSM to perform "start from beginning
of relation" behaviour.
No, that's an
Simon Riggs si...@2ndquadrant.com wrote:
It would be more useful to think of this as look for huge chunks of
space and fill them rather than start at beginning, since space
won't always be right at start.
Either I misunderstand you or I disagree. If there's a huge chunk of
space near the end, and many many smaller spaces spread throughout,
what I'd like is for rows to be placed in those small ones.
On Thu, 2009-10-01 at 11:32 -0500, Kevin Grittner wrote:
Either I misunderstand you or I disagree.
That does seem to be a common stance, though I will read on. :-)
--
Simon Riggs www.2ndQuadrant.com
On Thu, 2009-10-01 at 11:32 -0500, Kevin Grittner wrote:
If there's a huge chunk of
space near the end, and many many smaller spaces spread throughout,
what I'd like is for rows to be placed in those small ones. This
would minimize the number of pages to read for queries, and would
present
Simon Riggs si...@2ndquadrant.com writes:
Yes, as Tom points out, this must be done with bias away from the very
end of the table.
I meant that we should start from the beginning of large spaces and that
we shouldn't assume that all space worth filling is at start of
relation.
Right. One
Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Yes, as Tom points out, this must be done with bias away from the
very end of the table.
I meant that we should start from the beginning of large spaces and
that we shouldn't assume that all space worth filling is at start of
relation.
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
So for example we might try resetting the search to the start of the
relation with probability 0.01.
If I understand the heuristic you propose, and my math skills haven't
eroded too badly from lack of use,
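A minimal standalone C sketch of the probabilistic-reset heuristic Tom
proposes (a hypothetical model, not server code; the names and the
single static variable are invented for illustration):

#include <stdlib.h>

#define RESET_PROB 0.01   /* Tom's example reset probability */

/* last sweep position, simplified to one global; the real FSM keeps
 * per-relation state */
static unsigned sweep_pos = 0;

/* uniform random value in [0, 1) */
static double
random_fraction(void)
{
    return (double) rand() / ((double) RAND_MAX + 1.0);
}

/* pick the next heap block to try inserting into (nblocks > 0) */
unsigned
next_candidate_block(unsigned nblocks)
{
    if (random_fraction() < RESET_PROB)
        sweep_pos = 0;                          /* restart at block 0 */
    else
        sweep_pos = (sweep_pos + 1) % nblocks;  /* continue the sweep */
    return sweep_pos;
}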
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Yes, as Tom points out, this must be done with bias away from the
very end of the table.
I meant that we should start from the beginning of large spaces and
that we shouldn't assume that all space worth filling is at start of
relation.
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
So for example we might try resetting the search to the start of the
relation with probability 0.01.
If I understand the heuristic you propose, and my math skills haven't
eroded too badly from lack of use,
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
0.99 percent chance to continue the sweep
Make that a 99% chance, or a 0.99 chance (in case the typo was not
apparent).
Am I saying something stupid here?
Well, besides that line?
-Kevin
On Thu, Oct 1, 2009 at 3:23 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
We probably could do with more bias. For example, we still prefer pages
close to the page we last inserted to, by searching for free space in
the same FSM page first, before starting the search from
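A simplified standalone model of that locality preference (assumed
behaviour with invented names; the blocks-per-FSM-page figure is
illustrative, and this is not the actual freespace.c code):

#include <stddef.h>

#define BLOCKS_PER_FSM_PAGE 4096        /* illustrative figure */
#define INVALID_BLOCK ((unsigned) -1)

/* free[b] = free bytes on heap block b; scan blocks [lo, hi) */
static unsigned
search_range(const size_t *free, unsigned lo, unsigned hi, size_t need)
{
    for (unsigned b = lo; b < hi; b++)
        if (free[b] >= need)
            return b;
    return INVALID_BLOCK;
}

/* try the FSM page covering last_used first, then everything */
unsigned
page_with_free_space(const size_t *free, unsigned nblocks,
                     unsigned last_used, size_t need)
{
    unsigned lo = last_used - (last_used % BLOCKS_PER_FSM_PAGE);
    unsigned hi = lo + BLOCKS_PER_FSM_PAGE;
    unsigned b = search_range(free, lo, hi < nblocks ? hi : nblocks, need);

    if (b != INVALID_BLOCK)
        return b;
    return search_range(free, 0, nblocks, need);  /* full fallback scan */
}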
Kevin Grittner kevin.gritt...@wicourts.gov writes:
The way I figure it, if there is a 0.01 chance to reset the sweep,
then there's a 0.99 percent chance to continue the sweep from the last
position. 0.99^229 is about 0.1, which means there is a 10% chance
not to have reset after that many
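Kevin's arithmetic holds up; a throwaway C check (the mean-steps line
is an extra observation from the geometric distribution, not a figure
from the thread):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double p = 0.01;                 /* chance of resetting per step */

    printf("0.99^229             = %.4f\n", pow(1.0 - p, 229));
    printf("steps for 10%% odds   = %.1f\n", log(0.1) / log(1.0 - p));
    printf("mean steps per reset = %.0f\n", 1.0 / p);
    /* prints ~0.1001, ~229.1, and 100 */
    return 0;
}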
Robert Haas robertmh...@gmail.com writes:
The elephant in the room here is that if the relation is a million
pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
is going to help us truncate the relation unless every tuple on page
1,000,000 gets updated or deleted.
Well, there
On Thu, Oct 1, 2009 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
A possible downside of keeping things compact this way is that you'd
probably get a longer average search distance because of all the early
pages tending to remain full. Maybe what we want is some bias against
inserting in the
On Thu, Oct 1, 2009 at 5:08 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
The elephant in the room here is that if the relation is a million
pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
is going to help us truncate the relation unless every tuple on page
1,000,000 gets updated or deleted.
Robert Haas robertmh...@gmail.com wrote:
The elephant in the room here is that if the relation is a million
pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
is going to help us truncate the relation unless every tuple on page
1,000,000 gets updated or deleted.
Perhaps
Tom Lane t...@sss.pgh.pa.us writes:
That doesn't preclude also providing some more-invasive tools
that people can use when they do get into that situation;
About this side of things, what about having any query which asks for
system columns (any of them) take a specific (new) exclusive row lock
Tom Lane t...@sss.pgh.pa.us wrote:
(Hm, so we might want to make the probability depend on
max_connections?)
Without doing rigorous math on it, I'd guess that to prevent
contention among n connections you'd want the probability of resetting
the sweep to be about 1 / (n * 2). That would mean
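A small standalone sketch of what tying the reset probability to
max_connections via that 1 / (n * 2) rule of thumb would produce
(hypothetical function; no such wiring to the GUC exists):

#include <stdio.h>

/* Kevin's rule of thumb: reset probability of 1 / (n * 2) */
static double
reset_probability(int max_connections)
{
    return 1.0 / (max_connections * 2.0);
}

int
main(void)
{
    int settings[] = {10, 100, 1000};

    for (int i = 0; i < 3; i++)
        printf("max_connections=%4d -> reset probability %.4f\n",
               settings[i], reset_probability(settings[i]));
    /* prints 0.0500, 0.0050, 0.0005 */
    return 0;
}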
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
I think it would make sense to just start using this once you get
into the last half or quarter of the free pages. If you go with the
last quarter, then you might want to use a higher probability than I
suggested above, although that would
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Fuzzy thinking there -- if it's the last quarter of the *free* pages,
the suggested probabilities should be fine. (Somehow I got to
thinking, for a moment, that it would be the last quarter of the
relation's overall pages)
It's going to be the latter --- we do not know, and are *not* going to
invest the cycles to find out, how many pages have a useful amount of
free space.
decibel deci...@decibel.org wrote:
*any* step that improves dealing with table bloat is extremely
welcome, as right now you're basically stuck rebuilding the table.
+1
Although, possibly more irritating than actually rebuilding it is
evaluating borderline bloat situations to determine if
On Fri, 2009-09-18 at 10:47 +0900, Itagaki Takahiro wrote:
Simon Riggs si...@2ndquadrant.com wrote:
* compact - page selection specifically attempts to find the lowest
numbered blocks, so that the table will naturally shrink over time.
We cannot shrink the table if one tuple remains at the end of table
and the tuple is always HOT-updated, because we
On Sep 18, 2009, at 1:09 AM, Simon Riggs wrote:
On Fri, 2009-09-18 at 10:47 +0900, Itagaki Takahiro wrote:
Simon Riggs si...@2ndquadrant.com wrote:
* compact - page selection specifically attempts to find the lowest
numbered blocks, so that the table will naturally shrink over time.
We cannot shrink the table if one tuple remains at the end of table
and the tuple is always HOT-updated, because we
On Thu, 2009-09-17 at 16:26 +0100, Simon Riggs wrote:
Just been looking again at the way FSM works. In fsm_search_avail() we
essentially have just a single way for working out how to search the
tree.
Seems like it would be good to abstract this so that we can implement a
number of FSM search strategies.
Just been looking again at the way FSM works. In fsm_search_avail() we
essentially have just a single way for working out how to search the
tree.
Seems like it would be good to abstract this so that we can implement a
number of FSM search strategies
* (current) randomize - page selection
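To make the proposed abstraction concrete, a hypothetical sketch of
pluggable search strategies (invented names, modelled on a plain array
of per-block free space rather than the real FSM tree):

#include <stddef.h>
#include <stdlib.h>

#define INVALID_BLOCK ((unsigned) -1)

typedef enum FSMSearchMode
{
    FSM_SEARCH_RANDOMIZE,   /* current: spread insertions around */
    FSM_SEARCH_COMPACT      /* proposed: prefer lowest block numbers */
} FSMSearchMode;

/* free[b] = free bytes on heap block b; nblocks > 0 */
unsigned
fsm_search_mode(const size_t *free, unsigned nblocks, size_t need,
                FSMSearchMode mode)
{
    unsigned start = (mode == FSM_SEARCH_COMPACT)
        ? 0                             /* shrink toward block 0 */
        : (unsigned) rand() % nblocks;  /* crude stand-in for the
                                           current randomized start */

    for (unsigned i = 0; i < nblocks; i++)
    {
        unsigned b = (start + i) % nblocks;
        if (free[b] >= need)
            return b;
    }
    return INVALID_BLOCK;   /* no block has enough free space */
}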
Simon Riggs si...@2ndquadrant.com wrote:
* compact - page selection specifically attempts to find the lowest
numbered blocks, so that the table will naturally shrink over time.
We cannot shrink the table if one tuple remains at the end of table
and the tuple is always HOT-updated, because we