This is great data. Thanks. I'll take a look and see if we might be able to trim it down a bit more.
Aaron

--- On Wed, 9/3/08, Nick <[EMAIL PROTECTED]> wrote:

> From: Nick <[EMAIL PROTECTED]>
> Subject: Re: JCS using lots of mem for objects written to disk?
> To: "JCS Users List" <jcs-users@jakarta.apache.org>
> Date: Wednesday, September 3, 2008, 8:09 PM
>
> Aaron,
>
> Thanks for your prompt response!
>
> Here are the results after setting jcs.default.cacheattributes.MaxObjects=0
> and putting two more 3 sec sleeps followed by GCs after the cache is loaded:
>
> 100K objects in the disk cache:
>
> TEST STARTING heap size: 2787 KB
> Purgatory Size = 89171
> TEST: Sleeping 15 sec to let objs roll to disk
> Purgatory Size = 0
> TEST after cache load: 37772 KB, delta 34985 KB
> TEST after 3 sec pause: 37772 KB, delta 34985 KB
> TEST after another 3 sec pause: 20216 KB, delta 17429 KB
> Purgatory Size = 0
>
> 200K objects in the disk cache:
>
> TEST STARTING heap size: 2787 KB
> Purgatory Size = 173664
> TEST: Sleeping 15 sec to let objs roll to disk
> Purgatory Size = 0
> TEST after cache load: 57444 KB, delta 54657 KB
> TEST after 3 sec pause: 57443 KB, delta 54656 KB
> TEST after another 3 sec pause: 39886 KB, delta 37099 KB
>
> Storing no objects in the memory cache reduced the heap, as we'd expect.
> However, we were still holding about 190 bytes per item in the disk cache
> (e.g., in the 200K objects case, the delta from the start was 37099 KB,
> which divided by 200K objects is 190 bytes; for the 100K objects case, it
> comes out to 178 bytes).
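[Editor's note: the heap-delta pattern in the quoted log can be sketched as below. This is a reconstruction under assumptions, not Nick's actual test program; the class name and the cache-loading placeholder are made up. The `jcs.default.cacheattributes.MaxObjects=0` setting mentioned above goes in the region's `cache.ccf` and keeps zero elements in the memory cache, so everything rolls to the disk auxiliary.]

```java
// Minimal sketch of the heap-delta measurement in the log above (a
// reconstruction, not the original test program). Each reading forces a
// GC and pauses so the number reflects live objects; the original test
// paused 3 sec, shortened here for brevity.
public class HeapDelta {
    static long usedKb() throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        Thread.sleep(200); // original used 3 sec pauses
        System.gc();
        return (rt.totalMemory() - rt.freeMemory()) / 1024;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = usedKb();
        System.out.println("TEST STARTING heap size: " + start + " KB");
        // ... load N items into the JCS region (or a HashMap) here ...
        long after = usedKb();
        System.out.println("TEST after cache load: " + after
                + " KB, delta " + (after - start) + " KB");
    }
}
```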
> I modified my test program to store the same data (TestData) in a simple
> HashMap, which resulted in this:
>
> 100K objects in-mem in HashMap:
>
> TEST STARTING heap size: 2787 KB
> TEST after cache load: 84312 KB, delta 81525 KB
> TEST after 3 sec pause: 83903 KB, delta 81116 KB
> TEST after another 3 sec pause: 83903 KB, delta 81116 KB
>
> 200K objects in-mem in HashMap:
>
> TEST STARTING heap size: 2787 KB
> TEST after cache load: 172580 KB, delta 169793 KB
> TEST after 3 sec pause: 171600 KB, delta 168813 KB
> TEST after another 3 sec pause: 171600 KB, delta 168813 KB
>
> Note that in the JCS case, after loading all 200K objects into the disk
> cache, JCS reported Data File Length = 140755580. So of the 168 MB being
> used by the HashMap test, we should expect about 140 MB of that to be data
> (assuming Java serialization is pretty efficient), leaving about 30 MB for
> the HashMap infrastructure.
>
> I then modified the HashMap test case to store a simple "new Object()" in
> the HashMap instead of a TestData:
>
> 100K keys in-mem in HashMap:
>
> TEST STARTING heap size: 2787 KB
> TEST after cache load: 16573 KB, delta 13786 KB
> TEST after 3 sec pause: 16572 KB, delta 13785 KB
> TEST after another 3 sec pause: 16572 KB, delta 13785 KB
>
> 200K keys in-mem in HashMap:
>
> TEST STARTING heap size: 2787 KB
> TEST after cache load: 33136 KB, delta 30349 KB
> TEST after 3 sec pause: 33070 KB, delta 30283 KB
> TEST after another 3 sec pause: 33070 KB, delta 30283 KB
>
> So JCS's overhead is really not that bad compared to a HashMap of just the
> keys mapped to Object (about 22% more space).
>
> I also ran the 200K TestData JCS cache, with 0 objects in memory, under
> the YourKit profiler. It showed 37 MB being used by the IndexedDiskCache,
> 36.1 MB of that in java.util.HashMap.
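[Editor's note: the per-item and percentage figures quoted above check out; a back-of-the-envelope version of the arithmetic, using the numbers from the quoted logs:]

```java
// Sanity check of the overhead figures quoted above, using the measured
// deltas from the quoted logs.
public class OverheadMath {
    public static void main(String[] args) {
        long jcsDeltaKb = 37099;     // JCS disk cache, 200K items, settled
        long mapKeysOnlyKb = 30283;  // HashMap of keys -> new Object(), 200K
        long items = 200000;

        long bytesPerItem = jcsDeltaKb * 1024L / items;
        System.out.println(bytesPerItem + " bytes per disk-cached item"); // ~190

        double extraPct = 100.0 * (jcsDeltaKb - mapKeysOnlyKb) / mapKeysOnlyKb;
        System.out.printf("JCS uses ~%.1f%% more than a key-only HashMap%n",
                extraPct); // roughly 22%
    }
}
```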
> It reported these as the primary mem users:
>
> +--------------------------------------------------------------------+---------------+------------------+
> | Name                                                               | Objects       | Shallow Size     |
> +--------------------------------------------------------------------+---------------+------------------+
> | java.lang.String                                                   | 200,429 25 %  | 8,017,160 21 %   |
> | char[]                                                             | 200,347 25 %  | 7,945,024 21 %   |
> | java.util.HashMap$Entry                                            | 200,254 25 %  | 9,612,192 26 %   |
> | org.apache.jcs.auxiliary.disk.indexed.IndexedDiskElementDescriptor | 200,000 25 %  | 6,400,000 17 %   |
> +--------------------------------------------------------------------+---------------+------------------+
>
> The interesting thing there is that my average String size is 40 bytes,
> ditto the char[]. Note that I form the key, and the values in TestData,
> using Integer.toString(int). I guess I'll have to look at the code for it;
> maybe it's really inefficient.
>
> After seeing this result, I went back and changed my code to use
> "new Integer(i)" as the key instead of "Integer.toString(i)". Results:
>
> 100K objects in the disk cache:
>
> TEST STARTING heap size: 2787 KB
> Purgatory Size = 89646
> TEST: Sleeping 15 sec to let objs roll to disk
> Purgatory Size = 0
> TEST after cache load: 32268 KB, delta 29481 KB
> TEST after 3 sec pause: 14712 KB, delta 11925 KB
> TEST after another 3 sec pause: 14712 KB, delta 11925 KB
>
> 200K objects in the disk cache:
>
> TEST STARTING heap size: 2787 KB
> Purgatory Size = 176268
> TEST: Sleeping 15 sec to let objs roll to disk
> Purgatory Size = 0
> TEST after cache load: 46339 KB, delta 43552 KB
> TEST after 3 sec pause: 28783 KB, delta 25996 KB
> TEST after another 3 sec pause: 28783 KB, delta 25996 KB
> TEST after another 30 sec pause: 28783 KB, delta 25996 KB
>
> This gets us down to a per-disk-cached-item overhead of 130 bytes. This
> may be the minimum overhead achievable with the current JCS implementation
> of the indexed disk cache.
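[Editor's note: the win from switching keys is mostly object-shape, not String inefficiency. A rough shallow-size estimate is sketched below; the header and alignment figures are assumptions (they vary by JVM and by 32- vs 64-bit pointers — YourKit's 40-byte average String suggests a 64-bit JVM here), but the ratio explains the drop seen above when moving from Integer.toString(i) to new Integer(i) keys.]

```java
// Rough shallow-size estimate: a numeric String key is two objects
// (String + backing char[]), while an Integer is one small object.
// The 8-byte header and 8-byte alignment below are assumptions for a
// 32-bit JVM; exact sizes vary by JVM.
public class KeyFootprint {
    static int align8(int n) { return (n + 7) & ~7; }

    public static void main(String[] args) {
        int digits = 6;                                  // e.g. the key "123456"
        int header = 8;                                  // assumed object header
        // Pre-Java-7 String: header + 3 ints (offset, count, hash) + array ref
        int stringShallow = align8(header + 3 * 4 + 4);
        int charArray = align8(header + 4 + 2 * digits); // header + length + chars
        int integerShallow = align8(header + 4);         // header + int value

        System.out.println("String key ~" + (stringShallow + charArray) + " bytes");
        System.out.println("Integer key ~" + integerShallow + " bytes");
    }
}
```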
>
> Nick
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]