A couple more links here about issues with kernel 3.10:
http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html
http://www.postgresql.org/message-id/flat/20150203.174637.1316840640181577524.t-is...@sraoss.co.jp#20150203.174637.1316840640181577524.t-is...@sraoss.co.jp
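
For reference, a quick way to check what a box is actually running: the
kernel build plus the memory-management knobs that usually come up alongside
these kernel complaints. Transparent hugepages and zone reclaim are my usual
suspects here, an inference on my part rather than something the links above
prescribe:

    uname -r                                         # running kernel, e.g. 3.10.0-229.11.1.el7.x86_64
    cat /sys/kernel/mm/transparent_hugepage/enabled  # [always] is the common bad default for databases
    cat /proc/sys/vm/zone_reclaim_mode               # 0 is generally what you want on a dedicated DB server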

Hi Alvaro,

On 18/08/15 17:41, Alvaro Herrera wrote:
> I wrote:
> > One thing to look at is the rate of WAL generation for a set number of
> > transactions. Maybe the later releases are generating more WAL due to
> > multixacts, for instance (prior to 9.3 these weren't wal-logged.)
> > Also try 9.5alpha2, wherein bug #8470 is fixed, which is a big [...]

Hi Joshua,

On 18/08/15 16:12, Joshua D. Drake wrote:
> On 08/18/2015 08:01 AM, Michael H wrote:
> > Hi,
> > I've been tuning our new database server, here's some info...
> > CentOS Linux release 7.1.1503 (Core)
> > 3.10.0-229.11.1.el7.x86_64
> > 8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
> > 2 x AMD Opteron 6386SE 2.8GHz/16-core/140w - 32 cores total [...]

Hi Melvin,

On 18/08/15 17:19, Melvin Davidson wrote:
> > 8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
> > shared_buffers=60GB
> I would say 60GB is too high when you have 128GB system memory.
> Try lowering it to shared_buffers=32GB and let the O/S handle more of
> the work.

I have tested [...]

Hi Alvaro,

On 18/08/15 17:39, Alvaro Herrera wrote:
> Joshua D. Drake wrote:
> > On 08/18/2015 09:19 AM, Melvin Davidson wrote:
> > > > 8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
> > > > shared_buffers=60GB
> > > I would say 60GB is too high when you have 128GB system memory.
> > > Try lowering it to [...]

I wrote:
> One thing to look at is the rate of WAL generation for a set number of
> transactions. Maybe the later releases are generating more WAL due to
> multixacts, for instance (prior to 9.3 these weren't wal-logged.)

FWIW a very easy way to measure this is to look at the output of
pg_xlogdump.
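
For anyone following along, here is a rough sketch of that measurement from
the shell, assuming a 9.x server; pgbench and the "bench" database are
stand-ins for whatever fixed set of transactions is actually being compared:

    # WAL insert position before the workload
    BEFORE=$(psql -Atc "SELECT pg_current_xlog_insert_location()")

    pgbench -c 8 -t 1000 bench   # stand-in for the real workload

    # WAL insert position afterwards, then the difference in bytes
    AFTER=$(psql -Atc "SELECT pg_current_xlog_insert_location()")
    psql -Atc "SELECT pg_xlog_location_diff('$AFTER', '$BEFORE')"

pg_xlogdump can then break the same WAL interval down by record type (the
--stats option, new in 9.5, does this directly), which is the easy way to
spot a multixact-heavy workload.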

Hi,

I've been tuning our new database server, here's some info...

CentOS Linux release 7.1.1503 (Core)
3.10.0-229.11.1.el7.x86_64
8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
2 x AMD Opteron 6386SE 2.8GHz/16-core/140w - 32 cores total
4 x 300GB SAS 10k HDD [...]

Joshua D. Drake wrote:
> On 08/18/2015 09:19 AM, Melvin Davidson wrote:
> > > 8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
> > > shared_buffers=60GB
> > I would say 60GB is too high when you have 128GB system memory.
> > Try lowering it to shared_buffers=32GB and let the O/S handle more of
> > the work.

One thing to look at is the rate of WAL generation for a set number of
transactions. Maybe the later releases are generating more WAL due to
multixacts, for instance (prior to 9.3 these weren't wal-logged.)

Also try 9.5alpha2, wherein bug #8470 is fixed, which is a big [...]

> 8 x 16GB 1600MHz PC3-12800 DDR3 - 128GB total
> shared_buffers=60GB

I would say 60GB is too high when you have 128GB system memory.
Try lowering it to shared_buffers=32GB and let the O/S handle more of the
work.
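
Worth noting for anyone who tries this: shared_buffers only takes effect
after a full server restart, a reload is not enough. A minimal
postgresql.conf sketch of the change (the effective_cache_size line is my
own companion suggestion, not part of Melvin's advice):

    shared_buffers = 32GB         # down from 60GB; requires a restart, not a reload
    effective_cache_size = 96GB   # planner hint only; roughly RAM minus shared_buffers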

On Tue, Aug 18, 2015 at 11:49 AM, Jeff Janes jeff.ja...@gmail.com wrote:
> > [snip: Melvin's shared_buffers advice, quoted above]
> I would also look [...]

Here are the only things that I amended; all other settings are defaults.

maintenance_work_mem=2GB
checkpoint_segments=64
wal_keep_segments=128
max_prepared_transactions=10
max_wal_senders=3
wal_level=hot_standby
max_files_per_process=100
max_stack_depth=7MB
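
One more suggestion from me (not something asked for above): after
hand-editing postgresql.conf, confirm what the running server actually
picked up, since a forgotten restart leaves the old values in place.
For example:

    psql -c "SELECT name, setting, unit, source
             FROM pg_settings
             WHERE name IN ('maintenance_work_mem', 'checkpoint_segments',
                            'wal_keep_segments', 'max_stack_depth')"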