>8 x 16GB 1600MHz PC3-12800 DDR3                 - 128GB total
>>shared_buffers=60GB

I would say 60GB is too high when you have 128GB of system memory.
Try lowering it to shared_buffers=32GB and let the OS page cache handle
more of the caching work.
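
For example, something along these lines on 9.4 (a sketch only: 32GB is
roughly the usual 25%-of-RAM starting point, not a tuned value, and
shared_buffers only takes effect after a full server restart):

    -- 9.4+: writes the new value to postgresql.auto.conf
    ALTER SYSTEM SET shared_buffers = '32GB';
    -- verify after restarting the server
    SHOW shared_buffers;

The remaining memory is then left for the kernel page cache to use.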


On Tue, Aug 18, 2015 at 11:49 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Tue, Aug 18, 2015 at 8:01 AM, Michael H <mich...@wemoto.com> wrote:
>
>> Hi,
>>
>> I've been tuning our new database server, here's some info...
>>
>> CentOS Linux release 7.1.1503 (Core)
>> 3.10.0-229.11.1.el7.x86_64
>>
>> 8 x 16GB 1600MHz PC3-12800 DDR3                 - 128GB total
>> 2 x AMD Opteron 6386SE 2.8GHz/16-core/140w      - 32 cores total
>> 4 x 300GB SAS 10k HDD                           - raid 1+0 configuration
>> 1GB FBWC for P-series smart array               - cache enabled
>>
>> I'm using the CentOS provided packages for PostgreSQL
>> Version     : 9.2.13
>> Release     : 1.el7_1
>>
>> I'm getting fairly good statistics from this server, but after asking for
>> some advice I was pointed towards PostgreSQL 9.3 (POSIX shared memory)
>> and PostgreSQL 9.4 (pg_replication_slots).
>>
>> I dropped my original install of 9.2.13 above and went straight to the
>> 9.4 packages from the PostgreSQL repositories.
>>
>
>
> How did you get your data from 9.2 to 9.4?  Did you run ANALYZE on it
> afterwards?
>
>
>
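On the ANALYZE question above: neither pg_dump/pg_restore nor pg_upgrade
carries planner statistics across, so a freshly migrated 9.4 cluster can
pick much worse plans until the statistics are rebuilt, for example by
running this in each database:

    -- rebuild optimizer statistics after the 9.2 -> 9.4 migration
    ANALYZE;
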
>> Are there any known issues with my kernel and PostgreSQL? I found this
>> post -
>>
>> http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html
>>
>> which states there are known issues up to kernel 3.10. The reason I ask
>> is that no matter how small or big a configuration change I make, I can't
>> match my 9.2.13 install. I'm seeing huge decreases in TPS on all my
>> benchmarks.
>>
>> For example, on 9.2.13 (my own extremely heavy SQL file is being used
>> here, hence the lower TPS):
>>
>> 32      37.357197
>> 64      34.145088
>> 128     19.682544
>> 256     9.910772
>> 512     5.803358
>>
>> Compared to 9.4 - exactly the same tests and parameters were configured
>> (I also started from defaults and tuned up as best I could):
>>
>> 32      14.982111
>> 64      14.894859
>> 128     14.277631
>> 256     13.679516
>> 512     13.679516
>>
>
> Pick the query that dropped in performance the most, then run it with
> "explain (analyze, buffers)" and with track_io_timing turned on, and
> compare this between the servers.  Did the plan change, or just the time?
>
> Cheers,
>
> Jeff
>
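
To make Jeff's suggestion concrete, the comparison on both servers would
look roughly like this (the SELECT is only a placeholder for whichever of
your queries regressed the most; track_io_timing has been available since
9.2, so it works on both installs):

    -- changing track_io_timing at runtime needs a superuser session
    SET track_io_timing = on;
    -- run the same statement on 9.2 and 9.4 and compare plans and I/O times
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;  -- substitute the slow query

If the plans match and only the I/O timings differ, that points at caching
or configuration; if the plans changed, missing or stale statistics are the
more likely culprit.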



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
