Josh Berkus wrote:
>
> > I think the difficulty is figuring out how to get the existing
> > workers to give us some memory when a new one comes along. You want
> > the first worker to potentially use ALL the memory... until worker #2
> > arrives.
>
> Yeah, doing this would mean that you couldn't give worker #1 all the
> memory, …
>> Relevant to this is the question: *when* does vacuum do its memory
>> allocation? Is memory allocation reasonably front-loaded, or does
>> vacuum keep grabbing more RAM until it's done?
>
> All at start.
That means that "allocation by halves" would work fine.
--
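The thread never spells out "allocation by halves". The usual reading is
that each worker, at the moment it starts (memory is grabbed all at
start, per the exchange above), reserves half of whatever is still
unreserved in a shared pool. A minimal standalone sketch of that reading
follows; the pool size, the floor, and every name in it are invented,
not actual PostgreSQL code.

/*
 * Hypothetical sketch of "allocation by halves": each worker, at vacuum
 * start, reserves half of whatever is still unreserved in a shared pool.
 * All names and numbers are invented; in a real implementation
 * pool_remaining would live in shared memory under a lock.
 */
#include <stdio.h>

#define POOL_KB      (512 * 1024)   /* invented autovacuum memory pool: 512 MB */
#define MIN_SHARE_KB (16 * 1024)    /* floor so a late worker never starves */

static long pool_remaining = POOL_KB;

static long
reserve_share(void)
{
    long share = pool_remaining / 2;

    if (share < MIN_SHARE_KB)
        share = MIN_SHARE_KB;       /* may oversubscribe the pool slightly */
    pool_remaining -= share;
    return share;
}

int
main(void)
{
    for (int worker = 1; worker <= 4; worker++)
        printf("worker %d reserves %ld kB\n", worker, reserve_share());
    return 0;
}

Worker #1 alone still gets 256 MB of the 512 MB pool here, and each
later arrival finds half of the remainder waiting, so the first worker
takes the lion's share without ever locking later workers out.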
Excerpts from Josh Berkus's message of Tue Nov 16 15:52:14 -0300 2010:
> I think the difficulty is figuring out how to get the existing
> workers to give us some memory when a new one comes along. You want
> the first worker to potentially use ALL the memory... until worker #2
> arrives.
Yeah, doing this would mean that you couldn't give worker #1 all the
memory, …
On Tue, 2010-11-16 at 10:36 -0800, Josh Berkus wrote:
> On 11/16/10 9:27 AM, Robert Haas wrote:
> > I'm a little skeptical about creating more memory tunables. DBAs who
> > are used to previous versions of PG will find that their vacuum is now
> > really slow, because they adjusted maintenance_work_mem but not this …
On Tue, Nov 16, 2010 at 1:36 PM, Josh Berkus wrote:
> On 11/16/10 9:27 AM, Robert Haas wrote:
>> I'm a little skeptical about creating more memory tunables. DBAs who
>> are used to previous versions of PG will find that their vacuum is now
>> really slow, because they adjusted maintenance_work_mem but not this …
On 11/16/10 9:27 AM, Robert Haas wrote:
> I'm a little skeptical about creating more memory tunables. DBAs who
> are used to previous versions of PG will find that their vacuum is now
> really slow, because they adjusted maintenance_work_mem but not this …
Also, generally people who are using autovacuum …
On Tue, Nov 16, 2010 at 11:12 AM, Alvaro Herrera wrote:
> Magnus was just talking to me about having a better way of controlling
> memory usage on autovacuum. Instead of each worker using up to
> maintenance_work_mem, which ends up as a disaster when DBA A sets it to
> a large value and DBA B raises autovacuum_max_workers, …
On 16.11.2010 18:12, Alvaro Herrera wrote:
> Thoughts?
Sounds reasonable, but you know what would be even better? Use less
memory in vacuum, so that it doesn't become an issue to begin with.
There was some discussion on that back in 2007
(http://archives.postgresql.org/pgsql-hackers/2007-02/ms
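One direction such trimming could take (independent of whatever that
2007 thread settled on) is to shrink the dead-tuple list itself: lazy
vacuum keeps a flat array of 6-byte item pointers, and a per-page bitmap
of dead line pointers is smaller whenever several dead tuples share a
page. An illustrative standalone sketch; the types and the per-page
limit are simplified stand-ins, not PostgreSQL code.

/*
 * Illustrative sketch: store vacuum's dead tuples as a per-page bitmap
 * of line pointers rather than a flat array of 6-byte item pointers.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_OFFSETS 291                        /* stand-in for MaxHeapTuplesPerPage */

typedef struct DeadPage
{
    uint32_t blkno;                            /* heap block number */
    uint8_t  dead[(MAX_OFFSETS + 7) / 8];      /* one bit per line pointer */
} DeadPage;

static void
mark_dead(DeadPage *p, int offset)             /* offsets are 1-based */
{
    p->dead[(offset - 1) / 8] |= (uint8_t) (1u << ((offset - 1) % 8));
}

static int
is_dead(const DeadPage *p, int offset)
{
    return (p->dead[(offset - 1) / 8] >> ((offset - 1) % 8)) & 1;
}

int
main(void)
{
    DeadPage page;

    memset(&page, 0, sizeof(page));
    page.blkno = 42;
    mark_dead(&page, 3);
    mark_dead(&page, 7);
    printf("offset 3: %d, offset 5: %d\n",
           is_dead(&page, 3), is_dead(&page, 5));   /* prints 1, 0 */
    return 0;
}

A page costs roughly 41 bytes here (a 37-byte bitmap plus a 4-byte block
number) versus 6 bytes per dead tuple in the array, so the bitmap wins
once a page holds more than about seven dead tuples, and the membership
test during index cleanup becomes constant time instead of a binary
search.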
Itagaki Takahiro writes:
> On Wed, Nov 17, 2010 at 01:12, Alvaro Herrera wrote:
>> So for the initial implementation, we could just have each worker set
>> its local maintenance_work_mem to autovacuum_maintenance_memory /
>> max_workers.
>> That way there's never excessive memory usage.
> It sounds reasonable, but is there the same …
On Wed, Nov 17, 2010 at 01:12, Alvaro Herrera wrote:
> So for the initial implementation, we could just have each worker set
> its local maintenance_work_mem to autovacuum_maintenance_memory / max_workers.
> That way there's never excessive memory usage.
It sounds reasonable, but is there the same …
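The quoted proposal is just a division, done once per worker at startup
since vacuum's allocation is front-loaded. A standalone sketch follows;
autovacuum_maintenance_memory is the setting proposed in this thread (it
does not exist), and the 1 MB floor is an assumption borrowed from the
usual lower bound on work_mem-style settings.

/*
 * Sketch of the quoted proposal: each worker derives its local
 * maintenance_work_mem once, at startup, by simple division.
 */
#include <stdio.h>

static int
worker_share_kb(int autovacuum_maintenance_memory_kb, int max_workers)
{
    int share = autovacuum_maintenance_memory_kb / max_workers;

    return share < 1024 ? 1024 : share;   /* assumed 1 MB floor */
}

int
main(void)
{
    /* a 1 GB pool split among 3 workers -> about 341 MB each */
    printf("%d kB per worker\n", worker_share_kb(1024 * 1024, 3));
    return 0;
}

The obvious cost, echoing the skepticism upthread, is that a lone worker
is confined to 1/max_workers of the pool even when no one else is
running, which is exactly what the "allocation by halves" idea elsewhere
in the thread tries to avoid.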
Magnus was just talking to me about having a better way of controlling
memory usage on autovacuum. Instead of each worker using up to
maintenance_work_mem, which ends up as a disaster when DBA A sets it to
a large value and DBA B raises autovacuum_max_workers, we could simply
have an "autovacuum_maintenance_memory" setting …