Hi Tom/Alvaro,
Kindly let us know whether the correction provided in the previous mail is
fine. The current code handles scenario-1 anyway, whereas it is still
vulnerable to scenario-2.
From previous mail:
*Scenario-1:* current_time (2015) -> changed_to_past (1995) ->
stays-here-for-half-day -> correct
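The hazard behind these scenarios is that a nap computed as a wall-clock difference explodes when the clock is set back; a minimal illustration (hypothetical function, not the launcher's actual code):

```c
#include <assert.h>

/* Hypothetical illustration: if the next scheduled wakeup was computed
 * before the system clock was set back from 2015 to 1995, the apparent
 * time until that wakeup is now roughly twenty years, and a naive
 * subtraction happily sleeps that long. */
static long long nap_secs(long long now, long long next_wakeup)
{
    long long nap = next_wakeup - now;
    return nap > 0 ? nap : 0;
}
```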
Hi,
To my understanding it will probably not open the door to worse situations!
Please correct me if my understanding below is wrong.
The latch will wake up under below three situations:
a) Socket error (=> result is set to negative number)
b) timeout (=> result is set to TIMEOUT)
c) some event arrives
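When the latch wakes for case (c), the remaining sleep time has to be recomputed before waiting again; a sketch of a clamped recomputation (hypothetical helper, not the real unix_latch.c interface):

```c
#include <assert.h>

/* Hypothetical sketch: after the latch is woken by some other event,
 * the remaining sleep must be recomputed.  If it is derived from
 * wall-clock timestamps and the clock has jumped backwards, the
 * remainder can balloon past the original timeout; clamping bounds
 * the damage. */
static long remaining_sleep(long start, long now, long timeout)
{
    long elapsed = now - start;          /* negative if the clock went back */
    long left = timeout - elapsed;
    if (left < 0)
        left = 0;                        /* timeout already expired */
    if (left > timeout)
        left = timeout;                  /* clock jumped backwards: clamp */
    return left;
}
```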
Prakash Itnal writes:
> Sorry for the late response. The current patch only fixes the scenario-1
> listed below. It will not address the scenario-2. Also we need a fix in
> unix_latch.c where the remaining sleep time is evaluated, if latch is woken
> by other events (or result=0). Here too it is po
Hi,
Sorry for the late response. The current patch only fixes the scenario-1
listed below. It will not address the scenario-2. Also we need a fix in
unix_latch.c where the remaining sleep time is evaluated, if latch is woken
by other events (or result=0). Here too it is possible the latch might go
On 2015-06-17 18:10:42 -0300, Alvaro Herrera wrote:
> Yeah, the case is pretty weird and I'm not really sure that the server
> ought to be expected to behave. But if this is actually the only part
> of the server that misbehaves because of sudden gigantic time jumps, I
> think it's fair to patch it.
Tom Lane wrote:
> Alvaro Herrera writes:
> > Yeah, the case is pretty weird and I'm not really sure that the server
> > ought to be expected to behave. But if this is actually the only part
> > of the server that misbehaves because of sudden gigantic time jumps, I
> > think it's fair to patch it.
On Wed, Jun 17, 2015 at 5:10 PM, Alvaro Herrera
wrote:
> Yeah, the case is pretty weird and I'm not really sure that the server
> ought to be expected to behave. But if this is actually the only part
> of the server that misbehaves because of sudden gigantic time jumps, I
> think it's fair to patch it.
Cédric Villemain wrote:
>
>
> Le 17/06/2015 23:10, Alvaro Herrera a écrit :
> > Tom Lane wrote:
> >> launcher_determine_sleep() does have a minimum sleep time, and it seems
> >> like we could fairly cheaply guard against this kind of scenario by also
> >> enforcing a maximum sleep time (of say 5 or 10 minutes).
Le 17/06/2015 23:10, Alvaro Herrera a écrit :
> Tom Lane wrote:
>> launcher_determine_sleep() does have a minimum sleep time, and it seems
>> like we could fairly cheaply guard against this kind of scenario by also
>> enforcing a maximum sleep time (of say 5 or 10 minutes). Not quite
>> convinced
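The min/max guard described here can be sketched as a simple clamp (constants and name are illustrative; launcher_determine_sleep's real signature and limits differ):

```c
#include <assert.h>

/* Illustrative constants, not PostgreSQL's actual values. */
#define MIN_AUTOVAC_SLEEP_MS 100L        /* existing-style minimum */
#define MAX_AUTOVAC_SLEEP_MS 300000L     /* proposed maximum: 5 minutes */

/* Clamp a computed nap so that neither a zero/negative result nor a
 * clock-jump-inflated result can busy-loop or stall the launcher. */
static long clamp_nap_ms(long nap_ms)
{
    if (nap_ms < MIN_AUTOVAC_SLEEP_MS)
        return MIN_AUTOVAC_SLEEP_MS;
    if (nap_ms > MAX_AUTOVAC_SLEEP_MS)
        return MAX_AUTOVAC_SLEEP_MS;
    return nap_ms;
}
```

With this guard, a backwards clock jump costs at most one maximum-length nap before the launcher resynchronizes.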
Alvaro Herrera writes:
> Yeah, the case is pretty weird and I'm not really sure that the server
> ought to be expected to behave. But if this is actually the only part
> of the server that misbehaves because of sudden gigantic time jumps, I
> think it's fair to patch it. Here's a proposed patch.
Haribabu Kommi writes:
> I can think of a case where the "launcher_determine_sleep" function
> returns a big sleep value because of a system time change.
> Because of that it is possible that the launcher is not generating
> workers to do the vacuum. Maybe I am wrong.
I talked with Alvaro about th
Prakash Itnal wrote:
> Currently the issue is easily reproducible. Steps to reproduce:
> * Set some aggressive values for auto-vacuuming.
> * Run heavy database update/delete/insert queries. This leads to invoking
> auto-vacuuming in quick succession.
> * Change the system time to an older date, e.g.
On Wed, Jun 17, 2015 at 2:17 PM, Prakash Itnal wrote:
> Hi,
>
> Currently the issue is easily reproducible. Steps to reproduce:
> * Set some aggressive values for auto-vacuuming.
> * Run heavy database update/delete/insert queries. This leads to invoking
> auto-vacuuming in quick succession.
>
Hi,
Currently the issue is easily reproducible. Steps to reproduce:
* Set some aggressive values for auto-vacuuming.
* Run heavy database update/delete/insert queries. This leads to invoking
auto-vacuuming in quick succession.
* Change the system time to an older date, e.g. 1995-01-01
Suddenly auto-
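For the first reproduction step, "aggressive values" might look like the following postgresql.conf fragment (illustrative settings chosen for demonstration, not the reporter's actual configuration):

```
autovacuum = on
autovacuum_naptime = 1s                  # wake the launcher every second
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.01    # vacuum after ~1% of rows change
```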
Hi,
@Alvaro Herrera, Thanks for the quick reply. I was on leave and hence not
able to reply sooner.
This issue was observed at a customer site. However, after a long discussion
and digging into what happened around the date 2nd May 2015, we learned that
the NTP server suddenly went back in time to 1995. It r
Prakash Itnal wrote:
> Hello,
>
> Recently we encountered an issue where the disc space is continuously
> increasing towards 100%. Then a manual vacuum freed the disc space. But
> again it is increasing. When we dug deeper we found that auto-vacuuming
> was either not running or stuck/hung
Hello,
Recently we encountered an issue where the disc space is continuously
increasing towards 100%. Then a manual vacuum freed the disc space. But
again it is increasing. When we dug deeper we found that auto-vacuuming
was either not running or stuck/hung.
Version: 9.1.12
Auto vacuum
On 4/18/2013 11:44 AM, Jan Wieck wrote:
Yes, that was the rationale behind it combined with "don't change
function call signatures and more" all over the place.
--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin
On 4/12/2013 2:08 PM, Alvaro Herrera wrote:
Tom Lane escribió:
Are you saying you intend to revert that whole concept? That'd be
okay with me, I think. Otherwise we need some thought about how to
inform the stats collector what's really happening.
Maybe what we need is to consider table tru
On 4/12/2013 1:57 PM, Tom Lane wrote:
Kevin Grittner writes:
Tom Lane wrote:
I think that the minimum appropriate fix here is to revert the hunk
I quoted, ie take out the suppression of stats reporting and analysis.
I'm not sure I understand -- are you proposing that is all we do
for both
Kevin Grittner writes:
> For now what I'm suggesting is generating statistics in all the
> cases it did before, plus the case where it starts truncation but
> does not complete it. The fact that before this patch there were
> cases where the autovacuum worker was killed, resulting in not
> genera
[some relevant dropped bits of the thread restored]
Tom Lane wrote:
> Kevin Grittner writes:
>> Tom Lane wrote:
>>> Kevin Grittner writes:
Jeff Janes wrote:
I propose to do the following:
(1) Restore the prior behavior of the VACUUM command. This
was only ever intended
Tom Lane escribió:
> Are you saying you intend to revert that whole concept? That'd be
> okay with me, I think. Otherwise we need some thought about how to
> inform the stats collector what's really happening.
Maybe what we need is to consider table truncation as a separate
activity from vacuum
Kevin Grittner writes:
> Tom Lane wrote:
>> I think that the minimum appropriate fix here is to revert the hunk
>> I quoted, ie take out the suppression of stats reporting and analysis.
> I'm not sure I understand -- are you proposing that is all we do
> for both the VACUUM command and autovacuu
Andres Freund writes:
> On 2013-04-12 13:09:02 -0400, Tom Lane wrote:
>> However, we're still thinking too small. I've been wondering whether we
>> couldn't entirely remove the dirty, awful kluges that were installed in
>> the lock manager to kill autovacuum when somebody blocked behind it.
>> Th
Tom Lane wrote:
> Kevin Grittner writes:
>> OK, will review that to confirm;but assuming that's right, and
>> nobody else is already working on a fix, I propose to do the
>> following:
>
>> (1) Restore the prior behavior of the VACUUM command. This was
>> only ever intended to be a fix for a se
On 2013-04-12 13:09:02 -0400, Tom Lane wrote:
> Kevin Grittner writes:
> > OK, will review that to confirm;but assuming that's right, and
> > nobody else is already working on a fix, I propose to do the
> > following:
>
> > (1) Restore the prior behavior of the VACUUM command. This was
> > only
Kevin Grittner writes:
> OK, will review that to confirm;but assuming that's right, and
> nobody else is already working on a fix, I propose to do the
> following:
> (1) Restore the prior behavior of the VACUUM command. This was
> only ever intended to be a fix for a serious autovacuum problem
Jeff Janes wrote:
>> If we're going to have the message, we should make it useful.
>> My biggest question here is not whether we should add this info,
>> but whether it should be DEBUG instead of LOG
> I like it being LOG. If it were DEBUG, I don't think anyone
> would be likely to see it when
On Thursday, April 11, 2013, Kevin Grittner wrote:
>
> > I also log the number of pages truncated at the time it gave up,
> > as it would be nice to know if it is completely starving or
> > making some progress.
>
> If we're going to have the message, we should make it useful. My
> biggest questi
On Thursday, April 11, 2013, Tom Lane wrote:
> Jeff Janes writes:
> > I guess I'm a couple releases late to review the "autovacuum truncate
> > exclusive lock" patch (a79ae0bc0d454b9f2c95a), but this patch did not only
> > affect autovac, it affects manual vacuum as well (as did the original
>
Jeff Janes wrote:
> I guess I'm a couple releases late to review the "autovacuum
> truncate exclusive lock" patch (a79ae0bc0d454b9f2c95a), but this
> patch did not only affect autovac, it affects manual vacuum as
> well (as did the original behavior it is a modification of). So
> the compiler co
Jeff Janes writes:
> I guess I'm a couple releases late to review the "autovacuum truncate
> exclusive lock" patch (a79ae0bc0d454b9f2c95a), but this patch did not only
> affect autovac, it affects manual vacuum as well (as did the original
> behavior it is a modification of). So the compiler cons
I guess I'm a couple releases late to review the "autovacuum truncate
exclusive lock" patch (a79ae0bc0d454b9f2c95a), but this patch did not only
affect autovac, it affects manual vacuum as well (as did the original
behavior it is a modification of). So the compiler constants are misnamed,
and the
Shane Ambler <[EMAIL PROTECTED]> writes:
> Would this be the issue fixed in 8.1.1? -
> Prevent autovacuum from crashing during ANALYZE of expression index
More likely this one:
2007-06-14 09:53 alvherre
* src/backend/commands/: vacuum.c (REL8_1_STABLE), vacuum.c
(REL8_2_STABLE),
"Alvaro Herrera" <[EMAIL PROTECTED]> writes:
> Shane Ambler wrote:
>
>> Given that the analyze will obviously take a long time, is this scenario
>> likely to happen with 8.3.1? or has it been fixed since 8.1.x?
>
> In 8.3, autovacuum cancels itself if it sees it is conflicting with
> another que
Alvaro Herrera wrote:
Shane Ambler wrote:
Given that the analyze will obviously take a long time, is this scenario
likely to happen with 8.3.1? or has it been fixed since 8.1.x?
In 8.3, autovacuum cancels itself if it sees it is conflicting with
another query.
Would this be the issue fixed
Shane Ambler wrote:
> Given that the analyze will obviously take a long time, is this scenario
> likely to happen with 8.3.1? or has it been fixed since 8.1.x?
In 8.3, autovacuum cancels itself if it sees it is conflicting with
another query.
> Would this be the issue fixed in 8.1.1? -
> Preve
I am currently attempting to import the world street map data as
currently available from openstreetmap.org and am wondering if I will
come across a possible problem that is mentioned on their website, which
appears to be relevant for 8.1
From http://wiki.openstreetmap.org/index.php/Mapnik th
On Thu, 13 Oct 2005 14:20:46 -0500
Kevin Grittner <[EMAIL PROTECTED]> wrote:
> I can confirm that the patch was in the snapshot I picked up this
> morning at about 10:30 CDT. We've been using it since then and
> have not seen the problem in spite of attempting to provoke it with
> database vacuum
On Thu, 13 Oct 2005 15:09:58 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Robert Creager <[EMAIL PROTECTED]> writes:
> > Might this be the same problem as the recent thread "database vacuum from
> > cron hanging" where Tom is: "I'm busy volatile-izing all the code in
> > bufmgr.c ... should be able
I can confirm that the patch was in the snapshot I picked up this
morning at about 10:30 CDT. We've been using it since then and
have not seen the problem in spite of attempting to provoke it with
database vacuums.
-Kevin
>>> Tom Lane <[EMAIL PROTECTED]> 10/13/05 2:09 PM >>>
Robert Creager <[E
Robert Creager <[EMAIL PROTECTED]> writes:
> Might this be the same problem as the recent thread "database vacuum from cron
> hanging" where Tom is: "I'm busy volatile-izing all the code in bufmgr.c ...
> should be able to commit a fix soon."?
Seems reasonably likely, seeing that the original repo
I have a vacuum process kicked off by autovacuum that appears hung and is causing
general grief. When I put too many queries to the db in this state, the context
switches cruise up to ~90k and stay there. Queries that normally take < 1
second are up to over a minute. The autovacuum thread has been
Oops! [EMAIL PROTECTED] (Bruce Momjian) was seen spray-painting on a wall:
> That could be part of auto-vacuum. Vacuum itself would still
> sequential scan, I think. The idea is to easily grab expired tuples
> when they are most cheaply found.
The nifty handling of this would be to introduce "VAC
Matthew T. O'Connor wrote:
> Bruce Momjian wrote:
>
> >Matthew T. O'Connor wrote:
> >
> >
> >>Bruce Momjian wrote:
> >>
> >>
> >>>I have added an auto-vacuum TODO item:
> >>>
> >>>* Auto-vacuum
> >>> o Move into the backend code
> >>> o Scan the buffer cache to find free space or
Bruce Momjian wrote:
Matthew T. O'Connor wrote:
Bruce Momjian wrote:
I have added an auto-vacuum TODO item:
* Auto-vacuum
o Move into the backend code
o Scan the buffer cache to find free space or use background writer
o Use free-space map information to guide refilling
Matthew T. O'Connor wrote:
> Bruce Momjian wrote:
>
> >I have added an auto-vacuum TODO item:
> >
> >* Auto-vacuum
> >o Move into the backend code
> >o Scan the buffer cache to find free space or use background writer
> >o Use free-space map information to guide refilling
>
Bruce Momjian wrote:
I have added an auto-vacuum TODO item:
* Auto-vacuum
o Move into the backend code
o Scan the buffer cache to find free space or use background writer
o Use free-space map information to guide refilling
I'm not sure what you mean exactly by "Scan the buffer
Hello Russell,
Russell Smith wrote:
I am doing serious thinking about the implementation of Auto Vacuum as part of
the backend, not using libpq, but calling internal functions directly.
It appears to me that calling internal functions directly is a better
implementation than using the external library to do the job.
Matthew T. O'Connor looked at this fairly closely leading up to 8.0
feature freeze. There was a long discussion earlier this year with respect
to libpq vs. using backend functions directly to vacuum multiple
databases.
http://archives.postgresql.org/pgsql-hackers/2004-03/msg00931.php
This should
I have added an auto-vacuum TODO item:
* Auto-vacuum
o Move into the backend code
o Scan the buffer cache to find free space or use background writer
o Use free-space map information to guide refilling
-
Hi All,
I am doing serious thinking about the implementation of Auto Vacuum as part of
the backend, not using libpq, but calling internal functions directly.
It appears to me that calling internal functions directly is a better
implementation than using the external library to do the job.
I kn
On Tue, 2002-12-10 at 13:09, scott.marlowe wrote:
> On 10 Dec 2002, Rod Taylor wrote:
> > Perhaps a more appropriate rule would be 1 AVD per tablespace? Since
> > PostgreSQL only has a single tablespace at the moment
>
> But Postgresql can already place different databases on different data
On 10 Dec 2002, Rod Taylor wrote:
> > > Not sure what you mean by that, but it sounds like the behaviour of my AVD
> > > (having it block until the vacuum command completes) is fine, and perhaps
> > > preferable.
> >
> > I can easily imagine larger systems with multiple CPUs and multiple disk
On Tue, 2002-12-10 at 12:00, Greg Copeland wrote:
> On Tue, 2002-12-10 at 08:42, Rod Taylor wrote:
> > > > Not sure what you mean by that, but it sounds like the behaviour of my AVD
> > > > (having it block until the vacuum command completes) is fine, and perhaps
> > > > preferable.
> > >
> >
On Tue, 2002-12-10 at 08:42, Rod Taylor wrote:
> > > Not sure what you mean by that, but it sounds like the behaviour of my AVD
> > > (having it block until the vacuum command completes) is fine, and perhaps
> > > preferable.
> >
> > I can easily imagine larger systems with multiple CPUs and m
On 10 Dec 2002 at 9:42, Rod Taylor wrote:
> Perhaps a more appropriate rule would be 1 AVD per tablespace? Since
> PostgreSQL only has a single tablespace at the moment
Sorry, I am talking without doing much of it (stuck to Windows for my job). But
actually when I was talking with Matthew offlist
> > Not sure what you mean by that, but it sounds like the behaviour of my AVD
> > (having it block until the vacuum command completes) is fine, and perhaps
> > preferable.
>
> I can easily imagine larger systems with multiple CPUs and multiple disk
> and card bundles to support multiple datab
On Fri, 2002-11-29 at 07:19, Shridhar Daithankar wrote:
> On 29 Nov 2002 at 7:59, Matthew T. O'Connor wrote:
>
> > On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> > > On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > > > This is almost certainly a bad idea. vacuum is not very
> > > >
On Fri, 2002-11-29 at 06:59, Matthew T. O'Connor wrote:
> On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> > On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > > "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > > > interesting thought. I think this boils down to how many knobs do we
- Original Message -
From: "Shridhar Daithankar" <[EMAIL PROTECTED]>
To: "Matthew T. O'Connor" <[EMAIL PROTECTED]>
Sent: Monday, December 02, 2002 11:12 AM
Subject: Re: [HACKERS] Auto Vacuum Daemon (again...)
> On 28 Nov 2002 at 3:02, Matthew T. O'Connor
On 29 Nov 2002 at 7:59, Matthew T. O'Connor wrote:
> On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> > On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > > This is almost certainly a bad idea. vacuum is not very
> > > processor-intensive, but it is disk-intensive. Multiple vacuums run
On Thursday 28 November 2002 23:26, Shridhar Daithankar wrote:
> On 28 Nov 2002 at 10:45, Tom Lane wrote:
> > "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > > interesting thought. I think this boils down to how many knobs do we
> > > need to put on this system. It might make sense to say al
On 28 Nov 2002 at 10:45, Tom Lane wrote:
> "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > interesting thought. I think this boils down to how many knobs do we
> > need to put on this system. It might make sense to say allow up to X
> > concurrent vacuums, a 4 processor system might handle 4
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> interesting thought. I think this boils down to how many knobs do we
> need to put on this system. It might make sense to say allow up to X
> concurrent vacuums, a 4 processor system might handle 4 concurrent
> vacuums very well.
This is almost certainly a bad idea.
On Thu, 2002-11-28 at 01:58, Shridhar Daithankar wrote:
> There are differences in approach here. The reason I prefer polling rather than
> signalling is IMO vacuum should always be a low priority activity and as such it
> does not deserve a signalling overhead.
>
> A simpler way of integrating wo
On 27 Nov 2002 at 13:01, Matthew T. O'Connor wrote:
> On Wed, 2002-11-27 at 01:59, Shridhar Daithankar wrote:
> > I would not like postmaster forking into pgavd app. As far as possible, we
> > should not touch the core. This is a client app. and be it that way. Once we
> > integrate it into back
On 26 Nov 2002 at 21:54, Matthew T. O'Connor wrote:
> First: Do we want AVD integrated into the main source tree, or should it
> remain a "tool" that can be downloaded from gborg. I would think it
> should be controlled by the postmaster, and configured from GUC (at
> least basic on off settings)
Several months ago I tried to implement a special postgres backend as an
Auto Vacuum Daemon (AVD), somewhat like the stats collector. I failed
due to my lack of experience with the postgres source.
On Sep 23, Shridhar Daithankar released an AVD written in C++ that acted
as a client program rather