Hi,

I ran into this problem when there was a large number of writes to the 
disk (the workload involved bulk imports - reads and writes), and the disk 
statistics at the time showed high latency on write operations, which in 
turn slowed down the reads. The problem was resolved with the following 
steps:

1) Changing the disk configuration - it was RAID 5; I had it replaced with 
a RAID 10 array. You may not need a large array given the size of your 
database, but having multiple spindles will certainly help.
2) Regular VACUUM (autovacuum/analyze, manual VACUUM ANALYZE on certain 
tables during the day, and a complete database VACUUM ANALYZE over the 
weekend).
3) Regular reindexing - I schedule a REINDEX on busy tables every 3-4 
weeks. The commands behind 2) and 3) are sketched below.
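
For reference, the maintenance in 2) and 3) boils down to something like 
the following (the table names are placeholders - substitute your own 
heavily written tables):

    -- during the day: targeted maintenance on the busiest tables
    VACUUM ANALYZE import_staging;
    VACUUM ANALYZE orders;

    -- weekend window: database-wide vacuum and statistics refresh
    VACUUM ANALYZE;

    -- every 3-4 weeks: rebuild indexes on heavily updated tables
    REINDEX TABLE orders;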

HTH,

Shrirang Chitnis
Sr. Manager, Applications Development
HOV Services


Office: (866) 808-0935 Ext: 39210
[email protected]
www.hovservices.com


-----Original Message-----
From: Kevin Grittner [mailto:[email protected]]
Sent: Saturday, February 12, 2011 8:05 PM
To: [email protected]; [email protected]
Subject: Re: increased load on server

jf wrote:

>>> I tried to restart postgresql server and to reboot the server,
>>> but 5 minutes later, the trouble was here again.
>>
>> Any idea what changed during those 5 minutes?
> 5 minutes: just the time for my users to reconnect to my frontend.

> max_fsm_pages = 153600

Probably not the main problem, but you should be sure to schedule a
VACUUM ANALYZE of the whole database (run as the database superuser)
and check the last few lines of the output to make sure the free space
map has enough room to prevent bloat.
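
For example (a sketch assuming PostgreSQL 8.3, where the free space map
is still sized by hand; the output wording and numbers are illustrative):

    -- connect to the affected database as the superuser, then:
    VACUUM VERBOSE ANALYZE;

    -- the last few lines of the output summarize free space map usage,
    -- roughly like this:
    --   INFO:  free space map contains ... pages in ... relations
    --   DETAIL:  A total of ... page slots are in use (including overhead).
    --   ... page slots are required to track all free space.
    --   Current limits are:  153600 page slots, ... relations, ...
    -- if the slots required exceed max_fsm_pages (153600 here), raise it
    -- in postgresql.conf and restart, or the tables will bloat.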

> I saw the following during the problem:
> - a lot of blocks read per second (up to 46k) on the disk which
> only contains /var/lib (the PostgreSQL data, but not pg_xlog)
> - a lot of blocks written on the same disk (up to 20k) and on the
> system + pg_xlog disk (up to 25k)

> max_connections = 100
> shared_buffers = 64MB

> Connections during the problem: a peak of 63.
> During normal use my average is 6.5
>
> My system:
> - RAM: 4 GB
> - 2 disks:
> - sdb: contains only /var/lib;
> /var/lib/postgresql/8.3/main/pg_xlog is a symlink to another
> partition on another disk
> - sda: contains partitions for /, /var, /tmp, /boot, /home
> - CPU: Intel Xeon 1.6 GHz with 4 cores
> - No errors in the Linux system logs: dmesg, /var/log/syslog

You need to use some sort of connection pool to funnel those requests
through about nine or ten connections.  When you have more
connections than that active on the hardware you describe, you're
going to cut both throughput and response time, just when you don't
want those to suffer.  You want a pool which has a hard upper limit
on the number of connections and will queue requests when at the limit.
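
For example, PgBouncer is a common choice; a minimal pgbouncer.ini sketch
(the database name, paths, and ports here are assumptions, not your
actual setup) could look like this:

    [databases]
    ; database name and host are placeholders
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; queue work rather than handing every client its own backend
    pool_mode = transaction
    ; hard cap on server connections: roughly (2 * cores) + spindles
    default_pool_size = 10
    ; clients beyond the pool size wait inside PgBouncer instead of
    ; piling onto the database
    max_client_conn = 100

The applications then connect to port 6432 instead of 5432, so a spike
to 63 clients is absorbed by the pooler's queue rather than landing on
the server as 63 backends.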

Once you've done that you'll be in a better position to determine
whether your hardware is hefty enough to handle the load.  In
particular, I'm concerned that you don't have RAID, both from a
performance perspective, and from the ability to survive a drive
failure.  Those do happen.

-Kevin
