2014-06-30 20:34 GMT+02:00 Jeff Frost <j...@pgexperts.com>:

> On Jun 30, 2014, at 10:29 AM, Soni M <diptat...@gmail.com> wrote:
>
>> On Tue, Jul 1, 2014 at 12:14 AM, Andres Freund <and...@2ndquadrant.com> wrote:
>>
>>> My guess is it's a spinlock, probably xlogctl->info_lck via
>>> RecoveryInProgress(). Unfortunately inline assembler doesn't always seem
>>> to show up correctly in profiles...

For this kind of issue, SystemTap or DTrace can be useful:

http://postgres.cz/wiki/Monitorov%C3%A1n%C3%AD_lwlocku_pomoc%C3%AD_systemtapu

With that you can identify which lock is the problem - please use Google
Translate.

Regards

Pavel

>>> What worked for me was to build with -fno-omit-frame-pointer - that
>>> normally shows the callers, even if it can't generate a proper symbol
>>> name.
>>>
>>> Soni: Do you use Hot Standby? Are there connections active while you
>>> have that problem? Any other processes with high CPU load?
>>>
>>> Greetings,
>>>
>>> Andres Freund
>>>
>>> --
>>> Andres Freund                     http://www.2ndQuadrant.com/
>>> PostgreSQL Development, 24x7 Support, Training & Services
>>
>> It is
>>
>>     96.62%  postgres  [.] StandbyReleaseLocks
>>
>> as Jeff said. It runs for quite a long time, more than 5 minutes I think.
>>
>> I also use Hot Standby. We have 4 streaming replicas; some of them have
>> active connections and some do not. This issue has lasted more than 4
>> days. On one of the standbys, the postgres process above is the only
>> process that consumes high CPU load.
>
> Compiling with -fno-omit-frame-pointer doesn't yield much more info:
>
>     76.24%  postgres              [.] StandbyReleaseLocks
>      2.64%  libcrypto.so.1.0.1e   [.] md5_block_asm_data_order
>      2.19%  libcrypto.so.1.0.1e   [.] RC4
>      2.17%  postgres              [.] RecordIsValid
>      1.20%  [kernel]              [k] copy_user_generic_unrolled
>      1.18%  [kernel]              [k] _spin_unlock_irqrestore
>      0.97%  [vmxnet3]             [k] vmxnet3_poll_rx_only
>      0.87%  [kernel]              [k] __do_softirq
>      0.77%  [vmxnet3]             [k] vmxnet3_xmit_frame
>      0.69%  postgres              [.] hash_search_with_hash_value
>      0.68%  [kernel]              [k] fin
>
> However, this server started progressing through the WAL files quite a bit
> better before I finished compiling, so we'll leave it running with this
> version and see if there's more info available the next time it starts
> replaying slowly.
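For readers following along, Andres's advice amounts to rebuilding with frame pointers preserved and then re-profiling so perf can walk the call stack. A minimal sketch of that workflow, in which the source directory, install prefix, and profiling duration are illustrative assumptions rather than anything stated in the thread:

```shell
# Rebuild PostgreSQL keeping frame pointers so perf can resolve callers.
# Source path and --prefix are examples only; adjust for your layout.
cd postgresql-src
./configure --prefix=/usr/local/pgsql CFLAGS='-O2 -fno-omit-frame-pointer'
make -j4 && make install

# Profile the busy startup/recovery process with call graphs (-g),
# then inspect callers of the hot symbol in the report.
perf record -g -p <pid-of-startup-process> -- sleep 30
perf report --stdio
```

Without `-fno-omit-frame-pointer`, GCC at `-O2` omits the frame pointer, which is why the earlier profile showed hot symbols but not who called them.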
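As a small aside on reading such profiles: the hottest symbol can be pulled out of a saved `perf report --stdio` dump mechanically. The sketch below reuses a few lines from the profile quoted in this thread as sample input; the temp-file path is purely illustrative.

```shell
# Sample lines taken from the perf profile posted in this thread.
cat > /tmp/perf_sample.txt <<'EOF'
76.24%  postgres              [.] StandbyReleaseLocks
 2.64%  libcrypto.so.1.0.1e   [.] md5_block_asm_data_order
 2.19%  libcrypto.so.1.0.1e   [.] RC4
EOF

# Numeric reverse sort on the overhead column, take the top entry,
# and print its last field (the symbol name).
top_symbol=$(sort -rn /tmp/perf_sample.txt | head -1 | awk '{print $NF}')
echo "$top_symbol"
```

On this sample the script prints `StandbyReleaseLocks`, matching what both Jeff and Soni observed at the top of their profiles.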