On 2016-04-05 17:12:11 -0400, Robert Haas wrote:
> On Wed, Mar 30, 2016 at 4:10 PM, Andres Freund <and...@anarazel.de> wrote:
> > Indeed. On SSDs I see about a 25-35% gain, on HDDs about 5%. If I
> > increase backend_flush_after to 64 (the bgwriter default), however,
> > I do get about 15% on HDDs as well.
> 
> I tried the same test mentioned in the original post on cthulhu (EDB
> machine, CentOS 7.2, 8 sockets, 8 cores per socket, 2 threads per
> core, Xeon E7-8830 @ 2.13 GHz).  I tested the effects of both
> multi_extend_v21 and the *_flush_after settings.  The machine has
> both HDD and SSD storage, but I used the HDD for this test.

> master, logged tables, 4 parallel copies:                                     
>         1m15.411s, 1m14.248s, 1m15.040s
> master, logged tables, 1 copy:                                                
>         0m28.336s, 0m28.040s, 0m29.576s
> multi_extend_v21, logged tables, 4 parallel copies:                           
>         0m46.058s, 0m44.515s, 0m45.688s
> multi_extend_v21, logged tables, 1 copy:                                      
>         0m28.440s, 0m28.129s, 0m30.698s
> master, logged tables, 4 parallel copies, {backend,bgwriter}_flush_after=0:   
>         1m2.817s, 1m4.467s, 1m12.319s
> multi_extend_v21, logged tables, 4 parallel copies, {backend,bgwriter}_flush_after=0:
>         0m41.301s, 0m41.104s, 0m41.342s
> master, logged tables, 1 copy, {backend,bgwriter}_flush_after=0:              
>         0m26.948s, 0m26.829s, 0m26.616s
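
For anyone wanting to reproduce: the exact script from the original
post isn't quoted here, but a minimal sketch of that kind of harness
(placeholder table/file names, not the actual script) would be:

    # run 4 concurrent server-side COPYs into separate logged tables
    # and measure the wall-clock time until the last one finishes;
    # copytest_N and /tmp/data.csv are placeholders
    for i in 1 2 3 4; do
        psql -c "COPY copytest_$i FROM '/tmp/data.csv' (FORMAT csv)" &
    done
    time wait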

Any chance you could repeat with backend_flush_after set to 64? I
wonder if the current default is simply too small for HDDs, given
their higher latency relative to SSDs.
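
For reference, that can be changed without a restart; 64 is in units
of blocks, i.e. 512kB at the default 8kB block size:

    psql -c "ALTER SYSTEM SET backend_flush_after = 64"
    psql -c "SELECT pg_reload_conf()"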

- Andres

