Title: RE: Freeable memory

Hi Mike,

I ran a test on one of our 7.3 databases with no other users logged in, taking before/after snapshots of the chunk breakdown.  It appears that some (but not all) freeable chunks get coalesced when the shared pool is flushed.

SQL> select
  ksmchcom  contents,
  count(*)  chunks,
  sum(decode(ksmchcls, 'recr', ksmchsiz))  recreatable,
  sum(decode(ksmchcls, 'freeabl', ksmchsiz))  freeable,
  sum(ksmchsiz)  total
from
  sys.x$ksmsp
where
  ksmchcls not like 'R%'
group by
  ksmchcom
/
CONTENTS             CHUNKS RECREATABLE   FREEABLE      TOTAL
---------------- ---------- ----------- ---------- ----------
KGL handles             188       54776                 54776
PL/SQL DIANA             72        6604     134196     140800
PL/SQL MPCODE            20        6496      25264      31760
PLS cca hp desc           1                    164        164
PLS non-lib hp            1        2096                  2096
character set m           5                  21456      21456
dictionary cach          85                 172468     172468
fixed allocatio          26         832                   832
free memory              25                          82956796
kzull                     6                    284        284
library cache           460       78924      67472     146396
multiblock rea            1                   1040       1040
permanent memor           1                          13179484
row cache lru            30        1320                  1320
session param v           9                  19764      19764
sql area                119      186336     149732     336068

16 rows selected.

SQL> alter system flush shared_pool;

System altered.

SQL> select
  ksmchcom  contents,
  count(*)  chunks,
  sum(decode(ksmchcls, 'recr', ksmchsiz))  recreatable,
  sum(decode(ksmchcls, 'freeabl', ksmchsiz))  freeable,
  sum(ksmchsiz)  total
from
  sys.x$ksmsp
where
  ksmchcls not like 'R%'
group by
  ksmchcom
/
CONTENTS             CHUNKS RECREATABLE   FREEABLE      TOTAL
---------------- ---------- ----------- ---------- ----------
KGL handles              67       19812                 19812
PL/SQL DIANA             66        4508     125668     130176
PL/SQL MPCODE            12        3036      13648      16684
PLS cca hp desc           1                    164        164
PLS non-lib hp            1        2096                  2096
character set m           5                  21456      21456
dictionary cach          78                 144728     144728
fixed allocatio          26         832                   832
free memory              35                          83352232
kzull                     6                    284        284
library cache           164       27060      24940      52000
permanent memor           1                          13179484
row cache lru            30        1320                  1320
session param v           9                  19764      19764
sql area                 46       64152      60320     124472

15 rows selected.

SQL>
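
Comparing the two runs, free memory grew from 82,956,796 to 83,352,232 bytes while the number of free chunks went from 25 to 35, so the released space clearly wasn't all merged back into larger free chunks.  If you want to see how fragmented the free list is after the flush, a follow-up query along these lines should work (a sketch I didn't run as part of this test; it assumes SYS access to x$ksmsp on the same instance):

SQL> select
  ksmchsiz  chunk_size,
  count(*)  chunks
from
  sys.x$ksmsp
where
  ksmchcls = 'free'   -- chunks on the free list; 'freeabl' chunks are still allocated
group by
  ksmchsiz
order by
  ksmchsiz
/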

As for determining the chunks per session, the only way I know is to dump the heaps via ORADEBUG DUMP commands.  The trace files will have the breakdown of type and status for each chunk.  I don't remember off the top of my head how to dump it for an individual session, but it can be researched (see the sketch after the trace excerpt below).  Here is a sample of ORADEBUG DUMP HEAPDUMP 10:

*** 2002.01.18.10.43.55.000
*** SESSION ID:(7.8650) 2002.01.18.10.43.55.000
******************************************************
HEAP DUMP heap name="sga heap"  desc=0x8000001c
 extent sz=0xfc4 alt=44 het=32767 rec=1 flg=2 opc=0
 parent=0 owner=0 nex=0 xsz=0xc91a64
EXTENT 0
  Chunk 85009b7c sz= 13179484    perm      "perm           "  alo=7200716
EXTENT 1
  Chunk 84009b80 sz= 15604404    free      "               "
  Chunk 84eeb634 sz=      560    recreate  "library cache  "  latch=0
     ds 84eeb870 sz=      560
  Chunk 84eeb864 sz=       96    freeable  "library cache  "
  Chunk 84eeb8c4 sz=      172    recreate  "KGL handles    "  latch=0
  Chunk 84eeb970 sz=      288    recreate  "KGL handles    "  latch=0
  Chunk 84eeba90 sz=      560    recreate  "library cache  "  latch=0
     ds 84eebccc sz=      560
  Chunk 84eebcc0 sz=       96    freeable  "library cache  "
  Chunk 84eebd20 sz=      172    recreate  "KGL handles    "  latch=0
  Chunk 84eebdcc sz=      560    recreate  "library cache  "  latch=856ca7e0
     ds 84eec008 sz=      560
  Chunk 84eebffc sz=       96    freeable  "library cache  "
  Chunk 84eec05c sz=     3952    freeable  "sql area       "  ds=84eee5e0
  Chunk 84eecfcc sz=      296    recreate  "KGL handles    "  latch=856ca7e0
  Chunk 84eed0f4 sz=      168    recreate  "library cache  "  latch=856ca7e0
     ds 84eed330 sz=      168
  Chunk 84eed19c sz=      392    freeable  "library cache  "  ds=84eef190
  Chunk 84eed324 sz=       96    freeable  "library cache  "
  Chunk 84eed384 sz=      288    recreate  "KGL handles    "  latch=0
  Chunk 84eed4a4 sz=     4248    recreate  "sql area       "  latch=856ca7e0
     ds 84eee5e0 sz=     8200
        84eec05c sz=     3952
  Chunk 84eee53c sz=      560    recreate  "library cache  "  latch=856ca7e0
     ds 84eef190 sz=      952
        84eed19c sz=      392
  Chunk 84eee76c sz=     2496    freeable  "sql area       "  ds=84efb478
  Chunk 84eef12c sz=       88    free      "               "
  Chunk 84eef184 sz=       96    freeable  "library cache  "
  Chunk 84ef1638 sz=     1440    freeable  "PL/SQL DIANA   "  ds=84ef4dfc
  Chunk 84ef1bd8 sz=       96    freeable  "library cache  "
  Chunk 84ef1c38 sz=      560    freeable  "library cache  "  ds=84f8a988
  Chunk 84ef1e68 sz=     2096    freeable  "PL/SQL DIANA   "  ds=84ef4dfc
  Chunk 84ef2698 sz=     2096    freeable  "PL/SQL DIANA   "  ds=84ef4dfc
  Chunk 84ef2ec8 sz=     2096    freeable  "PL/SQL DIANA   "  ds=84ef9d70
  Chunk 84ef36f8 sz=     2096    freeable  "PL/SQL DIANA   "  ds=84ef9d70
  Chunk 84ef3f28 sz=     1220    freeable  "PL/SQL DIANA   "  ds=84ef9d70
  Chunk 84ef43ec sz=      316    recreate  "PL/SQL DIANA   "  latch=856ca898
     ds 84ef4dfc sz=     8044
        84ef0e08 sz=     2096
        84ef1638 sz=     1440
        84ef1e68 sz=     2096
        84ef2698 sz=     2096
  Chunk 84ef4528 sz=       44    freeable  "library cache  "  ds=84ef6994
  Chunk 84ef4554 sz=      216    recreate  "PL/SQL MPCODE  "  latch=856ca898
     ds 84f01008 sz=     2436
    ......... snipped .......
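
For a single session, something along these lines should get you a heap dump for that session's server process from Server Manager (a sketch from memory, so verify the syntax and level bits on your version; the 12345 is a hypothetical OS PID taken from v$process.spid for the target process):

SVRMGR> connect internal
SVRMGR> oradebug setospid 12345
SVRMGR> oradebug dump heapdump 5

The level is a bit mask (1 dumps the PGA heap, 4 the UGA heap, so 5 dumps both), and the chunks in the resulting trace file carry the same recreate/freeable/free/perm classes as the SGA dump above.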


HTH
Tony Aponte
Home Shopping Network


-----Original Message-----
From: Michael Ghelli [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 16, 2002 5:11 PM
To: Multiple recipients of list ORACLE-L
Subject: Freeable memory


All,
  In trying to solve some Library Cache latch contention issues, I've been
stumped on the following issues:

  1) Do freeable chunks get coalesced during an ALTER SYSTEM FLUSH
SHARED_POOL?
  2) If freeable chunks are freed when a session disconnects, is there a way
to determine chunks per session? The idea here is better memory management
through better session management.

  I've alleviated the majority of latch contention by altering the
_kgl_latch_count parameter, but haven't addressed what I think is the root
cause: an oversized shared pool. I've found scripts that show the amount of
memory occupied by the sql area and library cache, and most of the memory is
in freeable chunks. Before downsizing the shared pool, I'd like to have a
better understanding of how these chunks are managed.

Regards,
  Mike G