Hello,
These are interesting runs.
In a situation in which small values are set in dirty_bytes and
dirty_background_bytes, a buffer is likely to be stored on the HDD immediately
after the buffer is written to the kernel by the checkpointer. Thus, I
tried a quick hack to make the checkpointer invoke
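For reference, the Linux writeback knobs mentioned above can be set like this; the values are only an example of "small" settings that force writeback to start almost immediately after pages are dirtied, not a recommendation:

```
# /etc/sysctl.d/ fragment -- example values only, tune for the machine at hand
vm.dirty_background_bytes = 16777216   # start background writeback at 16 MB of dirty pages
vm.dirty_bytes = 67108864              # throttle writers once 64 MB is dirty
```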
Hello Takashi-san,
I've noticed that the behavior with 'checkpoint_partitions = 1' is not the
same as that of the original 9.5alpha2. The attached
'partitioned-checkpointing-v3.patch' fixes the bug, so please use it.
I've done two sets of runs on an old box (16 GB, 8 cores, RAID1 HDD)
with "pgbench
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Takashi Horikawa
> Sent: Saturday, September 12, 2015 12:50 PM
> To: Simon Riggs; Fabien COELHO
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Partitioned checkpointing
>
> Hi,
>
>
Hi,
Partitioned checkpointing has the significant disadvantage that it
increases random write IO by the number of passes, which is a bad idea,
*especially* on SSDs.
> >So we'd need logic like this
> >1. Run through shared buffers and analyze the files contained in there
> >2. Assign files to one
Hello Simon,
The idea to do a partial pass through shared buffers and only write a
fraction of dirty buffers, then fsync them is a good one.
Sure.
The key point is that we spread out the fsyncs across the whole checkpoint
period.
Yes, this is really Andres's suggestion, as I understood it.
Hello Takashi-san,
I wanted to do some tests with this POC patch. For this purpose, it would
be nice to have a guc which would allow activating this feature or not.
Could you provide a patch with such a guc? I would suggest having the
number of partitions as a guc, so that choosing 1
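Such a guc could look like this in postgresql.conf; the name 'checkpoint_partitions' appears elsewhere in this thread, but the default shown here is only assumed:

```
# number of checkpoint partitions (hypothetical GUC from the POC patch);
# 1 should restore the stock single-pass checkpoint behavior
checkpoint_partitions = 16
```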
On 11 September 2015 at 09:07, Fabien COELHO wrote:
> Some general comments :
>
Thanks for the summary Fabien.
> I understand that what this patch does is cutting the checkpoint of
> buffers into 16 partitions, each addressing 1/16 of the buffers, and each
> with its own
Hello Fabien,
> I wanted to do some tests with this POC patch. For this purpose, it would
> be nice to have a guc which would allow activating this feature or not.
Thanks.
> Could you provide a patch with such a guc? I would suggest having the
> number of partitions as a guc, so that choosing
On 09/11/2015 03:56 PM, Simon Riggs wrote:
> The idea to do a partial pass through shared buffers and only write a
> fraction of dirty buffers, then fsync them is a good one.
> The key point is that we spread out the fsyncs across the whole
> checkpoint period.
I doubt that's really what we want
Hi All,
Recently, I found a paper titled "Segmented Fuzzy Checkpointing for
Main Memory Databases", published in 1996 at the ACM Symposium on Applied
Computing, which inspired me to implement a similar mechanism in PostgreSQL.
Since the early evaluation results obtained from a 16-core server were
I don't feel that another source of the performance dip has been adequately
addressed: the full-page-write rush, as I call it here, would be a major issue.
That is, the average size of transaction log (XLOG) records jumps up sharply
immediately after the beginning of each checkpoint, resulting in
Hello Takashi-san,
I suggest that you have a look at the following patch submitted in June:
https://commitfest.postgresql.org/6/260/
And these two threads:
http://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1408251900211.11151@sto/
> Takashi Horikawa wrote:
>
> > # Since I'm not sure whether it is OK to send an email to this mailing
> > list attaching some files other than a patch, I refrain now from