On Sun, Feb 12, 2012 at 10:49 PM, Amit Kapila wrote:
>>> Without sorted checkpoints (or some other fancier method) you have to
>>> write out the entire pool before you can do any fsyncs. Or you have
>>> to do multiple fsyncs of the same file, with at least one occurring
>>> after the entire pool
to traverse more
to find a suitable buffer.
However, if a clean buffer is put in the freelist, it can be picked
directly from there.
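The point about sorted checkpoints can be sketched concretely: if the dirty buffers are ordered by file before writing, each file can get a single fsync immediately after its last write, rather than one giant fsync pass after the whole pool is flushed. A minimal simulation of that ordering (the function and tuple layout are hypothetical, for illustration only, not PostgreSQL code):

```python
from collections import defaultdict

def plan_checkpoint(dirty_buffers):
    """Order checkpoint work so each file is fsynced exactly once,
    right after its own writes finish.

    dirty_buffers: iterable of (file_name, block_number) pairs.
    Returns an ordered list of ('write', file, block) and
    ('fsync', file) steps.
    """
    by_file = defaultdict(list)
    for fname, block in dirty_buffers:
        by_file[fname].append(block)

    plan = []
    for fname in sorted(by_file):
        # Sequential blocks within a file are friendlier to the OS
        # writeback machinery than a random interleaving.
        for block in sorted(by_file[fname]):
            plan.append(('write', fname, block))
        # One fsync per file, issued as early as possible.
        plan.append(('fsync', fname))
    return plan
```

With an interleaved input like `[('b', 2), ('a', 7), ('a', 1)]`, the plan emits all writes for `a`, then `fsync a`, then the writes for `b`, then `fsync b`.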
-----Original Message-----
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Jeff Janes
Sent: Monday, February 13, 2012 12:14 AM
To: G
On Tue, Feb 7, 2012 at 1:22 PM, Greg Smith wrote:
> On 02/03/2012 11:41 PM, Jeff Janes wrote:
>>>
>>> -The steady stream of backend writes that happen between checkpoints have
>>> filled up most of the OS write cache. A look at /proc/meminfo shows
>>> around
>>> 2.5GB "Dirty:"
>>
>> "backend writ
On 02/03/2012 11:41 PM, Jeff Janes wrote:
-The steady stream of backend writes that happen between checkpoints have
filled up most of the OS write cache. A look at /proc/meminfo shows around
2.5GB "Dirty:"
"backend writes" includes bgwriter writes, right?
Right.
Has using a newer kernel wit
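The 2.5GB "Dirty:" figure quoted above comes straight from /proc/meminfo, where the kernel reports it in kB. A small helper for pulling that number out (a sketch; the function name is made up here):

```python
def dirty_bytes(meminfo_text):
    """Return the 'Dirty:' value from /proc/meminfo content, in bytes.

    /proc/meminfo reports the value in kB, e.g. 'Dirty:  2621440 kB'.
    """
    for line in meminfo_text.splitlines():
        if line.startswith('Dirty:'):
            kb = int(line.split()[1])
            return kb * 1024
    return None
```

Reading the live file is just `dirty_bytes(open('/proc/meminfo').read())`; 2621440 kB is exactly the 2.5GB observed.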
On Mon, Jan 16, 2012 at 5:59 PM, Greg Smith wrote:
> On 01/16/2012 11:00 AM, Robert Haas wrote:
>>
>> Also, I am still struggling with what the right benchmarking methodology
>> even is to judge whether
>> any patch in this area "works". Can you provide more details about
>> your test setup?
>
>
On Mon, Jan 16, 2012 at 8:59 PM, Greg Smith wrote:
> [ interesting description of problem scenario and necessary conditions for
> reproducing it ]
This is about what I thought was happening, but I'm still not quite
sure how to recreate it in the lab.
Have you had a chance to test with Linux 3.2
On 1/16/12 5:59 PM, Greg Smith wrote:
>
> What I think is needed instead is a write-heavy benchmark with a think
> time in it, so that we can dial the workload up to, say, 90% of I/O
> capacity, but that spikes to 100% when checkpoint sync happens. Then
rearrangements in syncing that reduce ca
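Dialing a benchmark to a fixed utilization with think time is simple arithmetic: if each transaction keeps the I/O subsystem busy for a service time s and then sleeps for a think time t, utilization is s / (s + t). Solving for t gives the sleep needed to pin the workload at, say, 90%. A sketch of that calculation (hypothetical helper, not part of any benchmark tool mentioned here):

```python
def think_time(service_time, target_utilization):
    """Per-transaction sleep needed to hold I/O utilization at a target.

    utilization = service / (service + think)
      => think = service * (1 - utilization) / utilization
    """
    if not 0 < target_utilization <= 1:
        raise ValueError("target_utilization must be in (0, 1]")
    return service_time * (1 - target_utilization) / target_utilization
```

For example, a transaction with 9ms of I/O needs a 1ms think time to sit at 90% utilization, leaving exactly the headroom that a checkpoint sync spike would then consume.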
On 01/16/2012 11:00 AM, Robert Haas wrote:
Also, I am still struggling with what the right benchmarking
methodology even is to judge whether
any patch in this area "works". Can you provide more details about
your test setup?
The "test" setup is a production server with a few hundred users at
On Mon, Jan 16, 2012 at 2:57 AM, Greg Smith wrote:
> ...
> 2012-01-16 02:39:01.184 EST [25052]: DEBUG: checkpoint sync: number=34
> file=base/16385/11766 time=0.006 msec
> 2012-01-16 02:39:01.184 EST [25052]: DEBUG: checkpoint sync delay: seconds
> left=3
> 2012-01-16 02:39:01.284 EST [25052]: D
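Those DEBUG lines have a regular shape, so per-file sync times can be scraped out of a log for analysis. A sketch of such a parser, matching only the fields visible in the excerpt above (the function name is invented for illustration):

```python
import re

# Matches: checkpoint sync: number=34 file=base/16385/11766 time=0.006 msec
SYNC_RE = re.compile(
    r'checkpoint sync: number=(\d+) file=(\S+) time=([\d.]+) msec')

def parse_sync_lines(log_text):
    """Return (number, file, msec) tuples for each checkpoint-sync line."""
    return [(int(m.group(1)), m.group(2), float(m.group(3)))
            for m in SYNC_RE.finditer(log_text)]
```

Feeding it a whole log makes it easy to spot the outlier fsyncs: `max(parse_sync_lines(text), key=lambda r: r[2])` pulls out the slowest file in a checkpoint.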
Last year at this point, I submitted an increasingly complicated
checkpoint sync spreading feature. I wasn't able to prove any
repeatable drop in sync time latency from those patches. While that was
going on, and continuing into recently, the production server that
started all this with its s