Sorry, I sent a wrong patch. The attached is the right one.
At Mon, 11 Mar 2019 13:57:21 +0100, Julien Rouhaud wrote in
> On Mon, Mar 11, 2019 at 10:03 AM David Rowley wrote:
> >
> > On Mon, 11 Mar 2019 at 09:58, Tom Lane wrote:
> > > The second patch is a delta that rounds off to the next smaller unit
> > > if there is one, producing a less noisy
On Mon, 11 Mar 2019 at 09:58, Tom Lane wrote:
> The second patch is a delta that rounds off to the next smaller unit
> if there is one, producing a less noisy result:
>
> regression=# set work_mem = '30.1GB';
> SET
> regression=# show work_mem;
> work_mem
> ----------
> 30822MB
> (1 row)
>
>
Julien Rouhaud writes:
> On Sat, Mar 9, 2019 at 10:04 PM Tom Lane wrote:
>> 2. It's always bugged me that we don't allow fractional unit
>> specifications, say "0.1GB", even for GUCs that are integers underneath.
>> That would be a simple additional change on top of this, but I didn't
>> do it
Julien Rouhaud writes:
> On Sat, Mar 9, 2019 at 10:04 PM Tom Lane wrote:
>> I tried this, and it seems to work pretty well. The first of the two
>> attached patches just teaches guc.c to support units for float values,
>> incidentally allowing "us" as an input unit for time-based GUCs.
> Why
On Sat, Mar 9, 2019 at 10:04 PM Tom Lane wrote:
>
> I tried this, and it seems to work pretty well. The first of the two
> attached patches just teaches guc.c to support units for float values,
> incidentally allowing "us" as an input unit for time-based GUCs.
Why not allow third party
BTW ... I noticed while fooling with this that GUC's out-of-range
messages can be confusing:
regression=# set vacuum_cost_delay = '1s';
ERROR: 1000 is outside the valid range for parameter "vacuum_cost_delay" (0 .. 100)
One's immediate reaction to that is "I put in 1, not 1000". I think
it'd
Gavin Flower writes:
> How about keeping the default unit of ms, but converting it to a
> 'double' for input, but storing it as int (or long?) number of
> nanoseconds. Gives finer grain of control without having to specify a
> unit, while still allowing calculations to be fast?
Don't really
Julien Rouhaud writes:
> On Sat, Mar 9, 2019 at 7:58 PM Andrew Dunstan wrote:
>> On 3/9/19 12:55 PM, Tom Lane wrote:
>>> The idea of converting vacuum_cost_delay into a float variable, while
>>> keeping its native unit as ms, seems probably more feasible from a
>>> compatibility standpoint.
On Sat, Mar 9, 2019 at 7:58 PM Andrew Dunstan wrote:
>
> On 3/9/19 12:55 PM, Tom Lane wrote:
> >> Maybe we could leave the default units as msec but store it and allow
> >> specifying as usec. Not sure how well the GUC mechanism would cope with
> >> that.
> > I took a quick look at that and I'm
David Rowley writes:
> I agree that vacuum_cost_delay might not be granular enough, however.
> If we're going to change the vacuum_cost_delay into microseconds, then
> I'm a little concerned that it'll silently break existing code that
> sets it. Scripts that do manual off-peak vacuums are
On Sat, 9 Mar 2019 at 16:11, Tom Lane wrote:
> I propose therefore that instead of increasing vacuum_cost_limit,
> what we ought to be doing is reducing vacuum_cost_delay by a similar
> factor. And, to provide some daylight for people to reduce it even
> more, we ought to arrange for it to be
I wrote:
> [ worries about overflow with VacuumCostLimit approaching INT_MAX ]
Actually, now that I think a bit harder, that disquisition was silly.
In fact, I'm inclined to argue that the already-committed patch
is taking the wrong approach, and we should revert it in favor of a
different idea.
Andrew Dunstan writes:
> Increase it to what?
Jeff's opinion that it could be INT_MAX without causing trouble is
a bit over-optimistic, see vacuum_delay_point():
    if (VacuumCostActive && !InterruptPending &&
        VacuumCostBalance >= VacuumCostLimit)
    {
        int     msec;
Jeff Janes writes:
> Now that this is done, the default value is only 5x below the hard-coded
> maximum of 10,000.
> This seems a bit odd, and not very future-proof. Especially since the
> hard-coded maximum appears to have no logic to it anyway, at least none
> that is documented. Is it just
On 3/6/19 1:38 PM, Jeremy Schneider wrote:
> On 3/5/19 14:14, Andrew Dunstan wrote:
>> This patch is tiny, seems perfectly reasonable, and has plenty of
>> support. I'm going to commit it shortly unless there are last minute
>> objections.
> +1
>
done.
cheers
andrew
--
Andrew Dunstan
Thanks for chipping in on this.
On Wed, 6 Mar 2019 at 01:53, Tomas Vondra wrote:
> But on the other hand it feels a bit weird that we increase this one
> value and leave all the other (also very conservative) defaults alone.
Which others did you have in mind? Like work_mem, shared_buffers? If
On 2019-03-05 17:14:55 -0500, Andrew Dunstan wrote:
> This patch is tiny, seems perfectly reasonable, and has plenty of
> support. I'm going to commit it shortly unless there are last minute
> objections.
+1
On Tue, Mar 5, 2019 at 7:53 AM Tomas Vondra wrote:
> But on the other hand it feels a bit weird that we increase this one
> value and leave all the other (also very conservative) defaults alone.
Are you talking about vacuum-related defaults or defaults in general?
In 2014, we increased the
On Mon, Feb 25, 2019 at 8:48 AM Robert Haas wrote:
> +1 for raising the default substantially. In my experience, and it
> seems others are in a similar place, nobody ever gets into trouble
> because the default is too high, but sometimes people get in trouble
> because the default is too low.
Robert Haas wrote:
> Not sure exactly what value would accomplish that goal.
I think autovacuum_vacuum_cost_limit = 2000 is a good starting point.
Yours,
Laurenz Albe
On Mon, Feb 25, 2019 at 8:39 AM David Rowley wrote:
> I decided to do the times by 10 option that I had mentioned. Ensue
> debate about that...
+1 for raising the default substantially. In my experience, and it
seems others are in a similar place, nobody ever gets into trouble
because the
On Tue, 26 Feb 2019 at 02:06, Joe Conway wrote:
>
> On 2/25/19 1:17 AM, Peter Geoghegan wrote:
> > On Sun, Feb 24, 2019 at 9:42 PM David Rowley
> > wrote:
> >> The current default vacuum_cost_limit of 200 seems to be 15 years old
> >> and was added in f425b605f4e.
> >>
> >> Any supporters for
I support raising the default.
From the standpoint of a no-clue database admin, it's easier to give more
resources to Postgres and google what the process called "autovacuum" does
than to learn why reads are slow.
It's also tricky that index only scans depend on working autovacuum, and
On Sun, Feb 24, 2019 at 9:42 PM David Rowley wrote:
> The current default vacuum_cost_limit of 200 seems to be 15 years old
> and was added in f425b605f4e.
>
> Any supporters for raising the default?
I also think that the current default limit is far too conservative.
--
Peter Geoghegan