This patch is based on the following TODO item:

Consider adding buffers the background writer finds reusable to the free list



I have tried implementing it and have taken readings for SELECT-only runs when
all the data is in either OS buffers or shared buffers.



The patch is a simple implementation of the bgwriter or checkpoint process
moving unused buffers (unpinned buffers with zero usage_count) onto the freelist.

Results (Results.html, attached with this mail) were taken with the following
configuration.

The test scenario (commands sketched below) is:
    1. Load all files of all tables and indexes into OS buffers (using
       pg_prewarm with the 'read' operation).
    2. Load as many "pgbench_accounts" table and "pgbench_accounts_pkey"
       pages as possible into shared buffers (using pg_prewarm with the
       'buffers' operation).
    3. Run pgbench with SELECT only for 20 minutes.
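
Roughly, these steps look like the following (a sketch only: it assumes the
benchmark database is named "pgbench" and that the pg_prewarm patch is
installed; the mode names 'read' and 'buffer' follow the contrib/pg_prewarm
naming, so adjust them to whatever the installed version expects):

    # Warm the OS cache with every pgbench table and index ('read' mode
    # reads the blocks from disk without putting them into shared buffers).
    psql -d pgbench -c "SELECT c.relname, pg_prewarm(c.oid, 'read')
                          FROM pg_class c
                         WHERE c.relname LIKE 'pgbench%'"

    # Pull pgbench_accounts and its primary key into shared buffers.
    psql -d pgbench -c "SELECT pg_prewarm('pgbench_accounts', 'buffer')"
    psql -d pgbench -c "SELECT pg_prewarm('pgbench_accounts_pkey', 'buffer')"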

Platform details:
    Operating System: Suse-Linux 10.2 x86_64
    Hardware : 4 core (Intel(R) Xeon(R) CPU L5408 @ 2.13GHz)
    RAM : 24GB

Server Configuration:
    shared_buffers = 6GB     (1/4 of RAM)

Pgbench configuration (invocation sketched below):
        transaction type: SELECT only
        scaling factor: 1200
        query mode: simple
        number of clients: <varying from 8 to 64 >
        number of threads: <varying from 8 to 64 >
        duration: 1200 s
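
The corresponding invocation would be roughly (a sketch; it assumes the
database was initialized with "pgbench -i -s 1200" and is named "pgbench"):

    # Select-only run of 20 minutes at each client/thread count.
    for n in 8 16 32 64; do
        pgbench -S -M simple -c $n -j $n -T 1200 pgbench
    done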





Comments or suggestions?



I am still collecting performance data for UPDATE and other operations with
different database configurations.



With Regards,

Amit Kapila.
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index dba19eb..2b9cfbb 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -1660,8 +1660,20 @@ SyncOneBuffer(int buf_id, bool skip_recently_used)
 
        if (!(bufHdr->flags & BM_VALID) || !(bufHdr->flags & BM_DIRTY))
        {
-               /* It's clean, so nothing to do */
-               UnlockBufHdr(bufHdr);
+               /*
+                * If the buffer is unused then move it to freelist
+                */
+               if ((bufHdr->flags & BM_VALID)
+                       && (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
+                       && (bufHdr->freeNext == FREENEXT_NOT_IN_LIST))
+               {
+                       InvalidateBuffer(bufHdr);
+               }
+               else
+               {
+                       /* It's clean, so nothing to do */
+                       UnlockBufHdr(bufHdr);
+               }
                return result;
        }
 
@@ -1677,6 +1689,20 @@ SyncOneBuffer(int buf_id, bool skip_recently_used)
        LWLockRelease(bufHdr->content_lock);
        UnpinBuffer(bufHdr, true);
 
+
+       /*
+        * If the buffer is unused then move it to freelist
+        */
+       LockBufHdr(bufHdr);
+       if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
+       {
+               InvalidateBuffer(bufHdr);
+       }
+       else
+       {
+               UnlockBufHdr(bufHdr);
+       }
+
        return result | BUF_WRITTEN;
 }
 
Original Postgres 9.3devel
    SIZE      16GB-5GB           16GB-5GB           16GB-5GB           16GB-5GB
    Clients   8C / 8T            16C / 16T          32C / 32T          64C / 64T
    RUN-1     60269  72325329    52853  63425001    32562  39096275    15375  18502725
    RUN-2     60370  72451857    58453  70151866    33181  39841490    16348  19670518
    RUN-3     59292  71159080    58976  70782600    33584  40344977    16469  19801260
    Average   59977  71978755    56761  68119822    33109  39760914    16064  19324834

With bgwriter/checkpoint process moving unused buffers to the free list
    SIZE      16GB-5GB           16GB-5GB           16GB-5GB           16GB-5GB
    Clients   8C / 8T            16C / 16T          32C / 32T          64C / 64T
    RUN-1     57152  68591311    60072  72096257    57957  69574459    50240  60363537
    RUN-2     60707  72858156    60013  72026319    57939  69566401    50090  60115068
    RUN-3     60567  72689308    59853  71832898    57925  69546383    50297  60360896
    Average   59475  71379592    59979  71985158    57940  69562414    50209  60279834

Difference in % (patched average vs. original average)
              -0.837 -0.8324     5.6694 5.6743     74.998 74.952      212.56 211.93

(In each cell the first value appears to be tps and the second the total number
of transactions processed over the 1200 s run; for example, at 32 clients the
gain is (57940 - 33109) / 33109 = 75.0%.)