On Thu, Sep 16, 2010 at 1:20 PM, Markus Wanner wrote:
> On 09/16/2010 04:26 PM, Robert Haas wrote:
>>
>> I agree. I've already said my piece on how I think that stuff would
>> need to be reworked to be acceptable, so we might have to agree to
>> disagree on those, especially if your goal is to get something committed that doesn't involve a major rewrite.
Morning,
On 09/16/2010 04:26 PM, Robert Haas wrote:
I agree. I've already said my piece on how I think that stuff would
need to be reworked to be acceptable, so we might have to agree to
disagree on those, especially if your goal is to get something
committed that doesn't involve a major rewrite.
On Thu, Sep 16, 2010 at 4:47 AM, Markus Wanner wrote:
>
> BTW, that'd be what I call a huge patch:
>
> bgworkers, excluding dynshmem and imessages:
> 34 files changed, 2910 insertions(+), 1421 deletions(-)
>
> from there to Postgres-R:
> 98 files changed, 14856 insertions(+), 230 deletions(-)
>
Hi,
On 09/15/2010 08:54 PM, Robert Haas wrote:
I think that the bar for committing to another in-core replication
solution right now is probably fairly high.
I'm not trying to convince you to accept the Postgres-R patch, at least
not now.
BTW, that'd be what I call a huge patch:
bgworkers, excluding dynshmem and imessages:
34 files changed, 2910 insertions(+), 1421 deletions(-)
from there to Postgres-R:
98 files changed, 14856 insertions(+), 230 deletions(-)
On Wed, Sep 15, 2010 at 2:28 PM, Markus Wanner wrote:
>> I guess the real issue here is whether it's possible to, and whether
>> you're interested in, extracting a committable subset of this work,
>> and if so what that subset should look like.
>
> Well, as it doesn't currently provide any real be
Robert,
On 09/15/2010 07:23 PM, Robert Haas wrote:
I haven't scrutinized your code but it seems like the
minimum-per-database might be complicating things more than necessary.
You might find that you can make the logic simpler without that. I
might be wrong, though.
I still think of that as
On Wed, Sep 15, 2010 at 2:48 AM, Markus Wanner wrote:
>> Hmm. So what happens if you have 1000 databases with a minimum of 1
>> worker per database and an overall limit of 10 workers?
>
> The first 10 databases would get an idle worker. As soon as real jobs
> arrive, the idle workers on databases
Hi,
On 09/15/2010 03:44 AM, Robert Haas wrote:
Hmm. So what happens if you have 1000 databases with a minimum of 1
worker per database and an overall limit of 10 workers?
The first 10 databases would get an idle worker. As soon as real jobs
arrive, the idle workers on databases that don't ha
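The policy sketched above (a per-database minimum, a global worker cap, and idle workers yielding to databases with real work) can be illustrated with a small Python sketch. Note this is an illustration of the described behavior, not code from the patch; the function name `assign_workers` and the dict layout are invented:

```python
def assign_workers(databases, max_workers, min_per_db=1):
    """Assign up to max_workers workers. Databases with pending jobs are
    served first; any remaining slots become idle per-database minimums."""
    assignment = {}
    slots = max_workers
    # Databases with real jobs claim workers first.
    for db in databases:
        if db["pending_jobs"] and slots > 0:
            n = min(db["pending_jobs"], slots)
            assignment[db["name"]] = n
            slots -= n
    # Remaining slots are handed out as idle workers, honoring the minimum.
    for db in databases:
        if slots <= 0:
            break
        if db["name"] not in assignment and min_per_db > 0:
            assignment[db["name"]] = min_per_db
            slots -= min_per_db
    return assignment

# 1000 databases, min 1 worker each, overall limit of 10: only the first
# ten databases end up with an idle worker, exactly as described above.
dbs = [{"name": f"db{i}", "pending_jobs": 0} for i in range(1000)]
print(assign_workers(dbs, max_workers=10))
```

As soon as a database reports pending jobs, it claims workers ahead of the idle minimums, which matches the "idle workers yield to real jobs" behavior in the quoted exchange.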
On Tue, Sep 14, 2010 at 2:59 PM, Markus Wanner wrote:
> On 09/14/2010 08:41 PM, Robert Haas wrote:
>>
>> To avoid consuming system resources forever if they're not being used.
>
> Well, what timeout would you choose? And how would you justify it compared
> to the amounts of system resources consumed by an idle process sitting
> there and waiting for a job?
On 09/14/2010 08:41 PM, Robert Haas wrote:
To avoid consuming system resources forever if they're not being used.
Well, what timeout would you choose? And how would you justify it
compared to the amounts of system resources consumed by an idle process
sitting there and waiting for a job?
I'
On Tue, Sep 14, 2010 at 2:26 PM, Markus Wanner wrote:
> On 09/14/2010 08:06 PM, Robert Haas wrote:
>> One idea I had was to have autovacuum workers stick around for a
>> period of time after finishing their work. When we need to autovacuum
>> a database, first check whether there's an existing worker that we can use, and if so use him. If not, start a new one.
On Tue, Sep 14, 2010 at 1:56 PM, Alvaro Herrera
wrote:
> Excerpts from Tom Lane's message of mar sep 14 13:46:17 -0400 2010:
>> Alvaro Herrera writes:
>> > I think we've had enough problems with the current design of forking a
>> > new autovac process every once in a while, that I'd like to have them as permanent processes instead, waiting for orders from the autovac launcher.
On 09/14/2010 08:06 PM, Robert Haas wrote:
One idea I had was to have autovacuum workers stick around for a
period of time after finishing their work. When we need to autovacuum
a database, first check whether there's an existing worker that we can
use, and if so use him. If not, start a new one.
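The reuse idea described here, keeping a finished worker around for a while so a new job on the same database can pick it up instead of forking, might be sketched as follows. The `WorkerPool` class, its method names, and the linger timeout are all invented for illustration:

```python
import time

class WorkerPool:
    """Keep finished workers around for `linger` seconds so that a new job
    on the same database can reuse one instead of starting a new process."""

    def __init__(self, linger=60.0):
        self.linger = linger
        self.idle = {}  # database name -> timestamp the worker went idle

    def finished(self, db, now=None):
        # A worker completed its job; remember it as reusable for this database.
        self.idle[db] = now if now is not None else time.time()

    def acquire(self, db, now=None):
        """Return True if an existing worker was reused, False if a new
        process would have to be started."""
        now = now if now is not None else time.time()
        # Drop workers that have been idle longer than the linger timeout.
        self.idle = {d: t for d, t in self.idle.items() if now - t < self.linger}
        return self.idle.pop(db, None) is not None
```

The linger timeout is exactly the knob being debated in the surrounding messages: too short and you fork as often as before, too long and idle workers pin system resources.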
Hi,
I'm glad discussion on this begins.
On 09/14/2010 07:55 PM, Tom Lane wrote:
So there is a minimum of one avworker per database?
Nope, you can set that to 0. You don't *need* to keep idle workers around.
That's a guaranteed
nonstarter. There are many people with thousands of databases,
Markus Wanner writes:
> On 09/14/2010 07:46 PM, Tom Lane wrote:
>> That seems like a fairly large can of worms to open: we have never tried
>> to make backends switch from one database to another, and I don't think
>> I'd want to start such a project with autovac.
> They don't. Even with bgworker
Hi,
On 09/14/2010 07:46 PM, Tom Lane wrote:
Alvaro Herrera writes:
I think we've had enough problems with the current design of forking a
new autovac process every once in a while, that I'd like to have them as
permanent processes instead, waiting for orders from the autovac
launcher. From that POV, bgworkers would make sense.
Excerpts from Tom Lane's message of mar sep 14 13:46:17 -0400 2010:
> Alvaro Herrera writes:
> > I think we've had enough problems with the current design of forking a
> > new autovac process every once in a while, that I'd like to have them as
> > permanent processes instead, waiting for orders from the autovac launcher. From that POV, bgworkers would make sense.
Alvaro Herrera writes:
> I think we've had enough problems with the current design of forking a
> new autovac process every once in a while, that I'd like to have them as
> permanent processes instead, waiting for orders from the autovac
> launcher. From that POV, bgworkers would make sense.
That seems like a fairly large can of worms to open: we have never tried to make backends switch from one database to another, and I don't think I'd want to start such a project with autovac.
Excerpts from Markus Wanner's message of mar sep 14 12:56:59 -0400 2010:
> What bugs me a bit is that I didn't really get much feedback regarding
> the *bgworker* portion of code. Especially as that's the part I'm most
> interested in feedback.
I think we've had enough problems with the current design of forking a new autovac process every once in a while, that I'd like to have them as permanent processes instead, waiting for orders from the autovac launcher. From that POV, bgworkers would make sense.
On 09/14/2010 06:26 PM, Robert Haas wrote:
As a matter of project management, I am inclined to think that until
we've hammered out this issue, there's not a whole lot useful that can
be done on any of the BG worker patches. So I am wondering if we
should set those to Returned with Feedback or bu
On Mon, Aug 30, 2010 at 11:30 AM, Markus Wanner wrote:
> On 08/30/2010 04:52 PM, Tom Lane wrote:
>> Let me just point out that awhile back we got a *measurable* performance
>> boost by eliminating a single indirect fetch from the buffer addressing
>> code path.
>
> I'll take a look at that, thanks.
Hi,
On 08/30/2010 04:52 PM, Tom Lane wrote:
Let me just point out that awhile back we got a *measurable* performance
boost by eliminating a single indirect fetch from the buffer addressing
code path.
I'll take a look at that, thanks.
So I don't have any faith in untested assertions
Neither
Markus Wanner writes:
> AFAICT we currently have four fixed size blocks to manage shared
> buffers: the buffer blocks themselves, the buffer descriptors, the
> strategy status (for the freelist) and the buffer lookup table.
> It's not obvious to me how these data structures should perform better
(Sorry, need to disable Ctrl-Return, which quite often sends mails
earlier than I really want... continuing my mail)
On 08/27/2010 10:46 PM, Robert Haas wrote:
Yeah, probably. I think designing something that works efficiently
over a network is a somewhat different problem than designing
someth
Hi,
On 08/27/2010 10:46 PM, Robert Haas wrote:
What other subsystems are you imagining servicing with a dynamic
allocator? If there were a big demand for this functionality, we
probably would have been forced to implement it already, but that's
not the case. We've already discussed the fact th
On Fri, Aug 27, 2010 at 2:17 PM, Markus Wanner wrote:
>> In addition, it means that maximum_message_queue_size_per_backend (or
>> whatever it's called) can be changed on-the-fly; that is, it can be
>> PGC_SIGHUP rather than PGC_POSTMASTER.
>
> That's certainly a point. However, as you are proposin
Hi,
On 08/26/2010 11:57 PM, Robert Haas wrote:
It wouldn't require you to preallocate a big chunk of shared memory
Agreed, you wouldn't have to allocate it in advance. We would still want
a configurable upper limit. So this can be seen as another approach for
an implementation of a dynamic allocator.
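A dynamic allocator with a configurable upper limit that can be adjusted at runtime (the PGC_SIGHUP-style, on-the-fly behavior mentioned in the quoted exchange) could behave roughly as in this toy accounting sketch; `BoundedAllocator` and its methods are invented for illustration and are not the dynshmem code:

```python
class BoundedAllocator:
    """Toy accounting for a dynamic allocator with a runtime-adjustable cap."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def alloc(self, size):
        # Fail (rather than spill or block) when the cap would be exceeded.
        if self.used + size > self.limit:
            return False
        self.used += size
        return True

    def free(self, size):
        self.used = max(0, self.used - size)

    def set_limit(self, new_limit):
        # Analogous to reloading a PGC_SIGHUP setting: existing allocations
        # stay valid even if they now exceed a lowered cap; only new
        # allocations are checked against the new limit.
        self.limit = new_limit
```

This is the key difference from a preallocated segment: the limit is a soft accounting cap that can change while the system runs, instead of a fixed size chosen at postmaster start.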
On Thu, Aug 26, 2010 at 3:40 PM, Markus Wanner wrote:
> On 08/26/2010 09:22 PM, Tom Lane wrote:
>>
>> Not having to have a hard limit on the space for unconsumed messages?
>
> Ah, I see. However, spilling to disk is unwanted for the current use cases
> of imessages. Instead the sender needs to be
On Thu, Aug 26, 2010 at 3:03 PM, Markus Wanner wrote:
>> On the more general topic of imessages, I had one other thought that
>> might be worth considering. Instead of using shared memory, what
>> about using a file that is shared between the sender and receiver?
>
> What would that buy us? (At t
On 08/26/2010 09:22 PM, Tom Lane wrote:
Not having to have a hard limit on the space for unconsumed messages?
Ah, I see. However, spilling to disk is unwanted for the current use
cases of imessages. Instead the sender needs to be able to deal with
out-of-(that-specific-part-of-shared)-memory
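The "sender must cope with out-of-space" behavior described here, as opposed to spilling to disk, amounts to a send operation that can fail and leaves flow control to the caller. A minimal sketch, with an invented `ImessageQueue` class standing in for the shared-memory queue:

```python
from collections import deque

class ImessageQueue:
    """Fixed-budget in-memory message queue: send() fails when the budget
    is exhausted instead of spilling to disk, so the sender must back off,
    retry later, or handle the condition some other way."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.messages = deque()

    def send(self, payload):
        if self.used + len(payload) > self.capacity:
            return False  # out of (that part of) memory: caller deals with it
        self.messages.append(payload)
        self.used += len(payload)
        return True

    def receive(self):
        if not self.messages:
            return None
        msg = self.messages.popleft()
        self.used -= len(msg)
        return msg
```

Receiving a message frees its budget, so a sender that gets `False` can simply retry once the receiver has drained the queue, which is the flow-control model argued for in the quoted messages.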
Markus Wanner writes:
> On 08/26/2010 02:44 PM, Robert Haas wrote:
>> On the more general topic of imessages, I had one other thought that
>> might be worth considering. Instead of using shared memory, what
>> about using a file that is shared between the sender and receiver?
> What would that b
Robert,
On 08/26/2010 02:44 PM, Robert Haas wrote:
I dunno. It was just a thought. I haven't actually looked at the
code to see how much synergy there is. (Sorry, been really busy...)
No problem, was just wondering if there's any benefit you had in mind.
On the more general topic of imessages, I had one other thought that might be worth considering.
On Thu, Aug 26, 2010 at 6:07 AM, Markus Wanner wrote:
> What'd be the benefits of having separate coordinator processes? They'd be
> doing pretty much the same: coordinate background processes. (And yes, I
> clearly consider autovacuum to be just one kind of background process).
I dunno. It was just a thought. I haven't actually looked at the code to see how much synergy there is.
Itagaki-san,
On 08/26/2010 01:02 PM, Itagaki Takahiro wrote:
OK, I see why you proposed coordinator hook (yeah, I call it hook :)
rather than adding user-defined processes.
I see. If you call that a hook, I'm definitely not a hook-hater ;-) at
least not according to your definition.
Howev
On Thu, Aug 26, 2010 at 7:42 PM, Markus Wanner wrote:
>> Markus, do you need B? Or A + standard backend processes are enough?
>
> No, I certainly don't need B.
OK, I see why you proposed coordinator hook (yeah, I call it hook :)
rather than adding user-defined processes.
> Why not just use an ordinary backend to do "user defined background processing"?
On 08/26/2010 05:01 AM, Itagaki Takahiro wrote:
Markus, do you need B? Or A + standard backend processes are enough?
If you need B eventually, starting with B might be better.
No, I certainly don't need B.
Why not just use an ordinary backend to do "user defined background
processing"? It cov
Itagaki-san,
thanks for reviewing this.
On 08/26/2010 03:39 AM, Itagaki Takahiro wrote:
Other changes in the patch doesn't seem be always needed for the purpose.
In other words, the patch is not minimal.
Hm.. yeah, maybe the separation between step1 and step2 is a bit
arbitrary. I'll look in
Hi,
thanks for your feedback on this, it sort of got lost below the
discussion about the dynamic shared memory stuff, IMO.
On 08/26/2010 04:39 AM, Robert Haas wrote:
It's not clear to me whether it's better to have a single coordinator
process that handles both autovacuum and other things, or
On Thu, Aug 26, 2010 at 11:39 AM, Robert Haas wrote:
>> On Tue, Jul 13, 2010 at 11:31 PM, Markus Wanner wrote:
>>> This patch turns the existing autovacuum launcher into an always running
>>> process, partly called the coordinator.
>
> It's not clear to me whether it's better to have a single coo
On Wed, Aug 25, 2010 at 9:39 PM, Itagaki Takahiro
wrote:
> On Tue, Jul 13, 2010 at 11:31 PM, Markus Wanner wrote:
>> This patch turns the existing autovacuum launcher into an always running
>> process, partly called the coordinator. If autovacuum is disabled, the
>> coordinator process still gets started and sticks around, but it doesn't dispatch vacuum jobs.
On Tue, Jul 13, 2010 at 11:31 PM, Markus Wanner wrote:
> This patch turns the existing autovacuum launcher into an always running
> process, partly called the coordinator. If autovacuum is disabled, the
> coordinator process still gets started and sticks around, but it doesn't
> dispatch vacuum jobs. The coordinator process now uses imessages to
> communicate
This patch turns the existing autovacuum launcher into an always running
process, partly called the coordinator. If autovacuum is disabled, the
coordinator process still gets started and sticks around, but it doesn't
dispatch vacuum jobs. The coordinator process now uses imessages to
communicate
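The coordinator's role as described, a permanent process that only dispatches vacuum jobs when autovacuum is enabled, might be outlined as a schematic event loop. The function name, queue representation, and return value are all made up for illustration:

```python
def coordinator_step(autovacuum_enabled, pending_events, idle_workers):
    """One iteration of a coordinator loop: the process always runs, but
    only dispatches jobs when autovacuum is enabled and a worker is free.
    Returns the list of (worker, event) dispatches made this step."""
    dispatched = []
    if not autovacuum_enabled:
        return dispatched  # stay alive, but dispatch nothing
    while pending_events and idle_workers:
        event = pending_events.pop(0)
        worker = idle_workers.pop(0)
        dispatched.append((worker, event))  # e.g. delivered via an imessage
    return dispatched
```

Keeping the loop running even with autovacuum disabled mirrors the patch description: the launcher becomes a permanent coordinator, and enabling autovacuum merely changes what it dispatches, not whether it runs.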