On 04.02.2012 07:24, Jeff Janes wrote:
Is it safe to assume that, under "#ifdef LWLOCK_STATS", a call to
LWLockAcquire will always precede any calls to LWLockWaitUntilFree
when a new process is started, to calloc the stats arrays?
>
I guess it is right now, because the only user is WALWrite, wh
On Mon, Jan 30, 2012 at 6:57 AM, Heikki Linnakangas
wrote:
> On 27.01.2012 15:38, Robert Haas wrote:
>>
>> On Fri, Jan 27, 2012 at 8:35 AM, Heikki Linnakangas
>> wrote:
>>>
>>> Yeah, we have to be careful with any overhead in there, it can be a hot
>>> spot. I wouldn't expect any measurable difference from the above, though.
On 31 January 2012 14:24, Robert Haas wrote:
> I think you're trying to muddy the waters. Heikki's implementation
> was different than yours, and there are some things about it I'm not
> 100% thrilled with, but it's fundamentally the same concept. The new
> idea you're describing here is something
On Tue, Jan 31, 2012 at 3:02 AM, Simon Riggs wrote:
> On Tue, Jan 31, 2012 at 7:43 AM, Heikki Linnakangas
> wrote:
>
>> That seems like a pretty marginal gain. If you're bound by the speed of
>> fsyncs, this will reduce the latency by the time it takes to mark the clog,
>> which is tiny in comparison to all the other stuff that needs to happen,
On Tue, Jan 31, 2012 at 7:43 AM, Heikki Linnakangas
wrote:
> That seems like a pretty marginal gain. If you're bound by the speed of
> fsyncs, this will reduce the latency by the time it takes to mark the clog,
> which is tiny in comparison to all the other stuff that needs to happen,
> like, flushing
On 31.01.2012 01:35, Simon Riggs wrote:
The plan here is to allow WAL flush and clog updates to occur
concurrently. Which allows the clog contention and update time to be
completely hidden behind the wait for the WAL flush. That is only
possible if we have the WALwriter involved since we need two
On Mon, Jan 30, 2012 at 8:04 PM, Heikki Linnakangas
wrote:
> So, what's the approach you're working on?
I had a few days' leave at the end of last week, so there was no time to fully
discuss the next steps with the patch. That's why you were requested
not to commit anything.
You've suggested there was no r
On 30.01.2012 21:55, Simon Riggs wrote:
On Mon, Jan 30, 2012 at 3:32 PM, Heikki Linnakangas
wrote:
On 30.01.2012 17:18, Simon Riggs wrote:
Peter and I have been working on a new version that seems likely to
improve performance over your suggestions. We should be showing
something soon.
Pl
On Mon, Jan 30, 2012 at 3:32 PM, Heikki Linnakangas
wrote:
> On 30.01.2012 17:18, Simon Riggs wrote:
>> I asked clearly and specifically for you to hold back committing
>> anything. Not sure why you would ignore that and commit without
>> actually asking myself or Peter. On a point of principle a
On Sun, Jan 29, 2012 at 1:20 PM, Greg Smith wrote:
> On 01/28/2012 07:48 PM, Jeff Janes wrote:
>>
>
>> I haven't inspected that deep fall off at 30 clients for the patch.
>> By way of reference, if I turn off synchronous commit, I get
>> tps=1245.8, which is 100% CPU limited. This sets a theoretical
On 30.01.2012 17:18, Simon Riggs wrote:
On Mon, Jan 30, 2012 at 2:57 PM, Heikki Linnakangas
wrote:
I committed this. I ran a pgbench test on an 8-core box and didn't see any
slowdown. It would still be good if you get a chance to rerun the bigger
test, but I feel confident that there's no measurable slowdown.
On Mon, Jan 30, 2012 at 2:57 PM, Heikki Linnakangas
wrote:
> I committed this. I ran a pgbench test on an 8-core box and didn't see any
> slowdown. It would still be good if you get a chance to rerun the bigger
> test, but I feel confident that there's no measurable slowdown.
I asked clearly and specifically for you to hold back committing anything.
On 27.01.2012 15:38, Robert Haas wrote:
On Fri, Jan 27, 2012 at 8:35 AM, Heikki Linnakangas
wrote:
Yeah, we have to be careful with any overhead in there, it can be a hot
spot. I wouldn't expect any measurable difference from the above, though.
Could I ask you to rerun the pgbench tests you did recently with this patch?
On 01/28/2012 07:48 PM, Jeff Janes wrote:
Others are going to test this out on high-end systems. I wanted to
try it out on the other end of the scale. I've used a Pentium 4,
3.2GHz, with 2GB of RAM and a single IDE drive running ext4. ext4 is
amazingly bad on IDE, giving about 25 fsyncs per second
On 2012-01-29 01:48, Jeff Janes wrote:
I ran three modes: head, head with commit_delay, and the group_commit patch.
shared_buffers = 600MB
wal_sync_method=fsync
optionally with:
commit_delay=5
commit_siblings=1
pgbench -i -s40
for clients in 1 5 10 15 20 25 30
pgbench -T 30 -M prepared -c $clients
On Fri, Jan 27, 2012 at 5:35 AM, Heikki Linnakangas
wrote:
> On 26.01.2012 04:10, Robert Haas wrote:
>
>>
>> I think you should break this off into a new function,
>> LWLockWaitUntilFree(), rather than treating it as a new LWLockMode.
>> Also, instead of adding lwWaitOnly, I would suggest that we
On Fri, Jan 27, 2012 at 8:35 AM, Heikki Linnakangas
wrote:
> Yeah, we have to be careful with any overhead in there, it can be a hot
> spot. I wouldn't expect any measurable difference from the above, though.
> Could I ask you to rerun the pgbench tests you did recently with this patch?
> Or can you
On 26.01.2012 04:10, Robert Haas wrote:
On Wed, Jan 25, 2012 at 3:11 AM, Heikki Linnakangas
wrote:
Attached is a patch to do that. It adds a new mode to
LWLockConditionalAcquire(), LW_EXCLUSIVE_BUT_WAIT. If the lock is free, it
is acquired and the function returns immediately. However, unlike
On Wed, Jan 25, 2012 at 3:11 AM, Heikki Linnakangas
wrote:
> I've been thinking, what exactly is the important part of this group commit
> patch that gives the benefit? Keeping the queue sorted isn't all that
> important - XLogFlush() requests for commits will come in almost the correct
> order anyway.
I've been thinking, what exactly is the important part of this group
commit patch that gives the benefit? Keeping the queue sorted isn't all
that important - XLogFlush() requests for commits will come in almost
the correct order anyway.
I also don't much like the division of labour between gro
On Fri, Jan 20, 2012 at 10:30 PM, Heikki Linnakangas
wrote:
> I spent some time cleaning this up. Details below, but here are the
> highlights:
>
> * Reverted the removal of wal_writer_delay
> * Removed heuristic on big flushes
No contested viewpoints on anything there.
> * Doesn't rely on WAL writer. Any process can act as the "leader" now.
On 21 January 2012 03:13, Peter Geoghegan wrote:
> I have taken the time to re-run the benchmark and update the wiki with
> that new information - I'd call it a draw.
On second thought, the occasional latency spikes that we see with my
patch (which uses the poll() based latch in the run that is
be
On 20 January 2012 22:30, Heikki Linnakangas
wrote:
> Maybe we should have a heuristic to split a large flush into smaller chunks.
> The WAL segment boundary would be a quite natural split point, for example,
> because when crossing the file boundary you have to issue separate fsync()s
> for the f
I spent some time cleaning this up. Details below, but here are the
highlights:
* Reverted the removal of wal_writer_delay
* Doesn't rely on WAL writer. Any process can act as the "leader" now.
* Removed heuristic on big flushes
* Uses PGSemaphoreLock/Unlock instead of latches
On 20.01.2012 17:
On Thu, Jan 19, 2012 at 10:46 PM, Peter Geoghegan wrote:
> On 19 January 2012 17:40, Robert Haas wrote:
>> I don't know what you mean by this. I think removing wal_writer_delay
>> is premature, because I think it still may have some utility, and the
>> patch removes it. That's a separate change that should be factored
>> out of this patch and discussed separately.
On 19 January 2012 17:40, Robert Haas wrote:
> I don't know what you mean by this. I think removing wal_writer_delay
> is premature, because I think it still may have some utility, and the
> patch removes it. That's a separate change that should be factored
> out of this patch and discussed separately.
On Wed, Jan 18, 2012 at 5:38 PM, Simon Riggs wrote:
> On Wed, Jan 18, 2012 at 1:23 AM, Robert Haas wrote:
>> On Tue, Jan 17, 2012 at 12:37 PM, Heikki Linnakangas
>> wrote:
>>> I found it very helpful to reduce wal_writer_delay in pgbench tests, when
>>> running with synchronous_commit=off. The reason is that hint bits don't
>>> get set until the commit record is flushed to disk.
On Wed, Jan 18, 2012 at 1:23 AM, Robert Haas wrote:
> On Tue, Jan 17, 2012 at 12:37 PM, Heikki Linnakangas
> wrote:
>> I found it very helpful to reduce wal_writer_delay in pgbench tests, when
>> running with synchronous_commit=off. The reason is that hint bits don't get
>> set until the commit record is flushed to disk.
On Tue, Jan 17, 2012 at 12:37 PM, Heikki Linnakangas
wrote:
> I found it very helpful to reduce wal_writer_delay in pgbench tests, when
> running with synchronous_commit=off. The reason is that hint bits don't get
> set until the commit record is flushed to disk, so making the flushes more
> frequent
Excerpts from Jim Nasby's message of Tue Jan 17 21:21:57 -0300 2012:
> On Jan 15, 2012, at 4:42 PM, Peter Geoghegan wrote:
> > Attached is a patch that Simon Riggs and I collaborated on. I
> > took the group commit patch that Simon posted to the list back in
> > November, and partially rewrote it.
On Jan 15, 2012, at 4:42 PM, Peter Geoghegan wrote:
> Attached is a patch that Simon Riggs and I collaborated on. I
> took the group commit patch that Simon posted to the list back in
> November, and partially rewrote it.
Forgive me if this is a dumb question, but I noticed a few places doing
On 17 January 2012 17:37, Heikki Linnakangas
wrote:
> I found it very helpful to reduce wal_writer_delay in pgbench tests, when
> running with synchronous_commit=off. The reason is that hint bits don't get
> set until the commit record is flushed to disk, so making the flushes more
> frequent reduces
On 17.01.2012 16:35, Peter Geoghegan wrote:
On 16 January 2012 08:11, Heikki Linnakangas
wrote:
I think it might be simpler if it wasn't the background writer that's
responsible for "driving" the group commit queue, but the backends
themselves. When a flush request comes in, you join the queue
On 16 January 2012 08:11, Heikki Linnakangas
wrote:
> Impressive results. How about uploading the PDF to the community wiki?
Sure: http://wiki.postgresql.org/wiki/Group_commit
> I think it might be simpler if it wasn't the background writer that's
> responsible for "driving" the group commit queue
On 16.01.2012 00:42, Peter Geoghegan wrote:
I've also attached the results of a pgbench-tools driven benchmark,
which are quite striking (just the most relevant image - e-mail me
privately if you'd like a copy of the full report, as I don't want to
send a large PDF file to the list as a courtesy