Funny you should mention... We had the same problem with an
8170 64-bit db on AIX 4.3.3 yesterday at about 14:30. Paging space had
become exhausted. In brief, I'm guessing the rash of
memory leaks in 8170-8172 (or the temporary fix I did until I can patch to 8174)
is the cause. Details below:
The box is an M80 with 4GB main memory, 1GB paging
space. Two dbs, db1 has sga of ~530M, db2 has sga of ~180M.
When I looked at the server, paging space
usage was at 97%. Trying to run commands at the unix prompt
generated the following:
ksh: 0403-031 The fork function failed. There is not enough memory available.
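(For the record, the quick way to see how bad things are on AIX is lsps; the
summary form is handy when you can still get a prompt. From memory the output
looks about like this -- the 97% is what we were staring at:)

# lsps -s
Total Paging Space   Percent Used
      1024MB               97%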
To protect itself, the opsys had apparently also killed a few
processes, as evidenced by a handful of PGSP_KILL errors in the system error
log.
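(If you want to pull those out of the error log yourself, errpt should do it;
I believe -J filters on the error label, but check the flags on your level of
AIX:)

# errpt -J PGSP_KILL             <- one-line summary of the kill entries
# errpt -a -J PGSP_KILL | pg     <- full detail on each killed process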
As users and developers (don't ask) began to bail off, paging
space usage dropped to 86% and we were able to maintain connectivity until a
(previously scheduled) maintenance window yesterday evening. Here is how
the paging space looked as we brought the dbs down:
Before either db is down:

# lsps -a
Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1024MB     86  yes     yes   lv

After db2 is brought down (this indicates db2 with sga of 180M had 185M of
paging space):

# lsps -a
Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1024MB     68  yes     yes   lv

After db1 is brought down (this indicates db1 with sga of 530M had 481M of
paging space):

# lsps -a
Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1024MB     21  yes     yes   lv

After the reboot, before any dbs are up:

# lsps -a
Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1024MB      1  yes     yes   lv

After the two dbs are back up:

# lsps -a
Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1024MB      1  yes     yes   lv

I had kicked the shared pool up a good bit on db1 12 days earlier
along with a couple of other init parm changes to deal with ORA-04031 errors due
to the memory leaks. Here are the relevant init parm entries (the
_db_handles_cached parm will impact performance):
# ORA-04031 errors with BAMIMA upon login. Until patched to 8.1.7.4,
# kick up shared_pool, make shared_pool_reserved_size = 10-15%,
# and try to bounce the db on occasion. Add large_pool area for parallel query.
# Finally, added _db_handles_cached=0 to keep from hitting one memory leaking bug.
# Remove this parm after patched to 8.1.7.4.
_db_handles_cached=0
shared_pool_size = 400M
shared_pool_reserved_size = 60M    # 10-15% of shared_pool_size = 60M
large_pool_size = 20M              # start at 20, maybe go to 40 if ok

I will probably cut back on the shared pool until I can get this patched to
8174 (and monitor paging space to bounce the dbs when needed).
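Since "monitor paging space" really means me remembering to log in, what I
will probably cron instead is something like the sketch below. To be clear,
this is just a sketch: the 80% threshold, the dba@example.com address, and
running it from the oracle account with a local sqlplus "/ as sysdba" login
are all assumptions to adjust for your own site.

#!/bin/ksh
# Hypothetical paging-space watchdog for AIX -- cron it every 10 minutes.
THRESHOLD=80                # warn above this %Used; placeholder value
ADMIN="dba@example.com"     # placeholder address

# Data rows of lsps -a are: PageSpace PhysVol VolGrp Size %Used Active Auto Type
PCT=`lsps -a | awk 'NR > 1 && $5 > max { max = $5 } END { print max + 0 }'`

if [ "$PCT" -gt "$THRESHOLD" ]; then
    echo "Paging space at ${PCT}% on `hostname` -- may be time to bounce a db" |
        mail -s "paging space warning: ${PCT}%" "$ADMIN"
fi

# While we are at it, log shared pool free memory shrinking toward the next
# ORA-04031 (v$sgastat is standard; this assumes a local bequeath connection).
sqlplus -s "/ as sysdba" <<'EOF' >> /tmp/shared_pool_free.log
set heading off feedback off
select to_char(sysdate, 'YYYY-MM-DD HH24:MI') || '  shared pool free: ' ||
       round(bytes/1024/1024) || 'M'
  from v$sgastat
 where pool = 'shared pool'
   and name = 'free memory';
exit
EOF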
HTH, Scott
>>> [EMAIL PROTECTED] 10/4/02 10:38:31 AM >>> HI all We had those messages yesterday in the listener.log file TNS-12500: TNS:listener failed to start a dedicated server process TNS-12540: TNS:internal limit restriction exceeded TNS-12560: TNS:protocol adapter error TNS-00510: Internal limit restriction exceeded Also on the unix side, we had a message about the OS that can not fork a new process. This is on 8172 32bits/AIX 4.3.3 The sga is 1.7G, the server has 8G of ram. There is between 150 and 300 users connected. The init.ora process parameter is set to 425. The unix number of process allowed is set to 500. I've check on metalink, but found nothing that we do not already do. Any ideas ? Thanks ===== St�phane Paquette DBA Oracle, consultant entrep�t de donn�es Oracle DBA, datawarehouse consultant [EMAIL PROTECTED] |
