Re: {Spam} [PERFORM] Will shared_buffers crash a server

2011-05-01 Thread Marti Raudsepp
Qiang Wang  wrote:
> We have PostgreSQL 8.3 running on a Debian Linux server. We built an
> application using the PHP programming language and a Postgres database. There are
> approximately 150 users using the software constantly. We saw some performance
> degradation, and after some study we figured out we would need to tune
> the PostgreSQL configuration.

> However, we suffered 2 server crashes after tuning the configuration.
> Does anyone have any idea how this can happen?

Could you explain in more detail, *how* it crashed?

On Linux, the first suspect for crashes is usually the OOM
(out-of-memory) killer. When the kernel thinks it's run out of memory,
it picks a task and kills it. Due to the way PostgreSQL uses shared
memory, it's more likely to be killed than other processes.

To figure out whether you've suffered an OOM kill, run "dmesg"; you
would see something like:
[2961426.424851] postgres invoked oom-killer: gfp_mask=0x201da,
order=0, oomkilladj=0
[2961426.424857] postgres cpuset=/ mems_allowed=0
[2961426.424861] Pid: 932, comm: postgres Not tainted 2.6.31-22-server
#65-Ubuntu
[2961426.424863] Call Trace:
...

The first step in solving OOM kills is disabling memory overcommit:
add 'vm.overcommit_memory = 2' to /etc/sysctl.conf and run the command
'echo 2 > /proc/sys/vm/overcommit_memory'
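For reference, a minimal sketch of those settings as a sysctl fragment (note that mode 2, not the default 0, is what actually disables overcommit; the overcommit_ratio line is an illustrative kernel default, not something from this thread):

```shell
#!/bin/bash
# Sketch: emit the sysctl lines that disable memory overcommit.
# With vm.overcommit_memory = 2 the kernel refuses allocations
# beyond swap + overcommit_ratio% of RAM instead of OOM-killing.
overcommit_conf() {
  echo "vm.overcommit_memory = 2"
  echo "vm.overcommit_ratio = 50"   # kernel default, shown for illustration
}
overcommit_conf
# Append the output to /etc/sysctl.conf and apply with: sysctl -p
```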

This doesn't prevent OOM kills entirely, but usually reduces them
significantly; queries will now abort with an "out of memory" error if
they're responsible for memory exhaustion.

You can also reduce the chance that PostgreSQL is chosen for killing
by changing its oom_adj, as documented here:
http://blog.credativ.com/en/2010/03/postgresql-and-linux-memory-management.html
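As a sketch of that oom_adj approach (the data directory path below is an assumption for a Debian 8.3 install; adjust for yours — this needs root and the legacy /proc/<pid>/oom_adj interface):

```shell
#!/bin/bash
# Sketch: exempt the postmaster from the OOM killer via the legacy
# /proc/<pid>/oom_adj interface (-17 means "never kill" on old kernels).
# PGDATA below is an assumed path; adjust it for your install.
PGDATA=${PGDATA:-/var/lib/postgresql/8.3/main}
protect_postmaster() {
  local pid
  # The first line of postmaster.pid holds the postmaster's pid.
  pid=$(head -n1 "$PGDATA/postmaster.pid" 2>/dev/null)
  [ -n "$pid" ] && echo -17 > "/proc/$pid/oom_adj"
}
```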

Regards,
Marti

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] The right SHMMAX and FILE_MAX

2011-05-01 Thread Kevin Grittner
Phoenix Kiula  wrote:
 
> Now, according to my reading in the PG manual and this list, a
> good recommended value for SHMMAX is
> 
>(shared_buffers * 8192)
 
Where did you see that?  The amount of data buffered is the number
of shared buffers * 8KB.  Taking shared_buffers as a number of bytes
and multiplying by 8K makes no sense at all.  Any documentation
which can be read to indicate that should be fixed.
 
Beyond that, shared memory is needed for more than just the actual
buffered disk pages, so you're still not looking at the whole picture
even once you stop dealing with "bytes squared".
 
-Kevin



Re: [PERFORM] The right SHMMAX and FILE_MAX

2011-05-01 Thread Greg Smith

On 05/01/2011 02:48 AM, Phoenix Kiula wrote:

Hi. I'm on a 64 Bit CentOS 5 system, quadcore processor, 8GB RAM and
tons of data storage (1 TB SATAII disks).

The current SHMMAX and SHMALL are (commas added for legibility) --

kernel.shmmax = 68,719,476,736
kernel.shmall = 4,294,967,296


That's set higher than the amount of RAM in the server.  Run the 
attached script; it will produce reasonable values for your server, 
presuming you'll never want to allocate >50% of the RAM in the server 
for shared memory.  Given standard tuning for shared_buffers is <40%, 
I've never run into a situation where this was a terrible choice if you 
want to just set and forget about it.  The only reason to fine-tune is if 
another major user of shared memory is running on the server.



Now, according to my reading in the PG manual and this list, a good
recommended value for SHMMAX is

(shared_buffers * 8192)


The value for shared_buffers stored internally is in 8192 byte pages:

select setting,unit,current_setting(name) from pg_settings where 
name='shared_buffers';

 setting | unit | current_setting
---------+------+-----------------
 4096    | 8kB  | 32MB

So any formula you found that does this sort of thing is just converting 
it back to bytes again, and is probably from an earlier PostgreSQL 
version where you couldn't set this parameter in memory units.  SHMMAX 
needs to be a bit bigger than shared_buffers in bytes.
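To make that concrete, here is a small arithmetic sketch using the 32MB example above (pure shell, no database needed):

```shell
#!/bin/bash
# Sketch: shared_buffers is stored internally as a count of
# 8192-byte pages, so converting back to bytes is a multiplication.
shared_buffers_pages=4096   # "setting" column from pg_settings
block_size=8192             # "unit" column: 8kB pages
buffer_bytes=$(( shared_buffers_pages * block_size ))
echo "$buffer_bytes bytes"  # 33554432 bytes = 32MB
```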



Similarly with "fs.file_max". There are articles like this one:
http://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html
Is this relevant for PostgreSQL performance at all, or should I skip that?


That's ancient history.  This is how big the default is on the two Linux 
distributions I have handy:


[RHEL5]
$ cat /proc/sys/fs/file-max
745312

[Debian Squeeze]
$ cat /proc/sys/fs/file-max
1645719

It was a tiny number back in the Red Hat 6 era that manual was written 
for; now it's very unlikely you'll exceed the kernel setting here.
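If you want to verify that on your own system, a quick sketch (the fallback sample values are made up so the snippet also runs where /proc is absent):

```shell
#!/bin/bash
# Sketch: /proc/sys/fs/file-nr reports three numbers: allocated
# handles, free handles, and the file-max limit.
report_file_handles() {
  local src=/proc/sys/fs/file-nr
  local line
  # Read from /proc when present; fall back to a made-up sample line.
  if [ -r "$src" ]; then line=$(cat "$src"); else line="4704 0 745312"; fi
  set -- $line
  echo "open file handles: $1 of $3"
}
report_file_handles
```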


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

#!/bin/bash

# Output lines suitable for sysctl configuration based
# on total amount of RAM on the system.  The output
# will allow up to 50% of physical memory to be allocated
# into shared memory.

# On Linux, you can use it as follows (as root):
# 
# ./shmsetup >> /etc/sysctl.conf
# sysctl -p

# Early FreeBSD versions do not support the sysconf interface
# used here.  The exact version where this works hasn't
# been confirmed yet.

page_size=$(getconf PAGE_SIZE)
phys_pages=$(getconf _PHYS_PAGES)

if [ -z "$page_size" ]; then
  echo "Error: cannot determine page size"
  exit 1
fi

if [ -z "$phys_pages" ]; then
  echo "Error: cannot determine number of memory pages"
  exit 2
fi

shmall=$(( phys_pages / 2 ))
shmmax=$(( shmall * page_size ))

echo "# Maximum shared segment size in bytes"
echo "kernel.shmmax = $shmmax"
echo "# Maximum total shared memory in pages"
echo "kernel.shmall = $shmall"



Re: [PERFORM] stored proc and inserting hundreds of thousands of rows

2011-05-01 Thread Greg Smith

On 04/30/2011 09:00 PM, Samuel Gendler wrote:
Some kind of in-memory cache of doc/ad mappings which the ad server 
interacts with will serve you in good stead and will be much easier to 
scale horizontally than most relational db architectures lend 
themselves to...Even something as simple as a process that pushes the 
most recent doc/ad mappings into a memcache instance could be 
sufficient - and you can scale your memcache across as many hosts as 
is necessary to deliver the lookup latencies that you require no 
matter how large the dataset. 


Many of the things I see people switching over to NoSQL key/value store 
solutions would be served equally well on the performance side by a 
memcache layer between the application and the database.  If you can map 
the problem into key/value pairs for NoSQL, you can almost certainly do 
that using a layer above PostgreSQL instead.


The main downside of that, and what people seem to object to, is that it 
makes for two pieces of software that need to be maintained; the NoSQL 
solutions can do it with just one.  If you have more complicated queries 
to run, though, the benefit of using a more capable database should 
outweigh that extra complexity.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books




Re: [PERFORM] The right SHMMAX and FILE_MAX

2011-05-01 Thread Adarsh Sharma
I am also in need of proper documentation that explains how to set the 
SHMMAX and SHMALL variables for Postgres.


What things need to be taken into consideration before doing that?
What are the values of SHMMAX & SHMALL if you have 16 GB RAM for a 
Postgres server?
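Applying the 50%-of-RAM rule from Greg's script earlier in the thread to a hypothetical 16 GB machine gives a sketch like the following (the 4096-byte page size is an assumption typical of x86; check getconf PAGE_SIZE on your box):

```shell
#!/bin/bash
# Sketch: SHMMAX/SHMALL for 16 GB RAM using the "half of RAM" rule
# from the shmsetup script posted earlier in this thread.
ram_bytes=$(( 16 * 1024 * 1024 * 1024 ))
page_size=4096                     # typical x86 page size (assumption)
shmmax=$(( ram_bytes / 2 ))        # SHMMAX is in bytes
shmall=$(( shmmax / page_size ))   # SHMALL is in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```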




Thanks

Phoenix Kiula wrote:

Hi. I'm on a 64 Bit CentOS 5 system, quadcore processor, 8GB RAM and
tons of data storage (1 TB SATAII disks).

The current SHMMAX and SHMALL are (commas added for legibility) --

kernel.shmmax = 68,719,476,736
kernel.shmall = 4,294,967,296

Now, according to my reading in the PG manual and this list, a good
recommended value for SHMMAX is

   (shared_buffers * 8192)

My postgresql.conf settings at the moment are:

max_connections = 300
shared_buffers = 300MB
effective_cache_size = 2000MB

By this calculation, shared_b * 8192 will be:

 2,457,600,000,000

That's a humongous number. So either the principle for SHMMAX is
amiss, or I am reading this wrongly?

Similarly with "fs.file_max". There are articles like this one:
http://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html

Is this relevant for PostgreSQL performance at all, or should I skip that?

Thanks for any pointers!


