Hi,

we have been using this patched version in a production environment for a few days now. It seems to be working nicely; the memory problems have disappeared.

- Juha

Igor Vaynberg wrote:
if someone can confirm that the patch works in a production env i will
be happy to commit it. i just haven't had the time to test it myself
yet.

-igor

On Tue, Jun 10, 2008 at 7:09 AM, Juha Alatalo
<[EMAIL PROTECTED]> wrote:
Hi All,

I ran our profiling tests (version 1.3.3) using Application.java and
Localizer.java patched by Stefan. The patch seems to solve our memory
problems.

Is this patch going into 1.3.4, and do you have any idea when 1.3.4 will be
released?

Best Regards
- Juha


Stefan Fußenegger wrote:
Hi Daniel,

I haven't put the patch into production yet, but I am quite confident that
it will help. As you can see in the example I attached to the JIRA issue
(I just attached a new version), the unpatched Localizer had 200 entries in
its cache, while the patched Localizer had only four - which is a Good Thing (tm),
as there are only 4 distinct cached values!

Regards, Stefan



Daniel Frisk wrote:
So the patch did help?

I too have observed this problem, but at the time it was less of a
problem than other heap eaters; now it is next in line. We have added a
script which automatically restarts the server when repeated OOMEs occur,
and without the patch we are down to a couple of restarts per week. But still,
who wouldn't want to see months of uptime...

// Daniel
jalbum.net


On 2008-06-10, at 11:29, Stefan Fußenegger wrote:

Hi Igor,

Thanks for your quick reply and the patch; sorry for searching only the
mailing list and not JIRA.

Your patch was for 1.4; I applied it to 1.3.3, created a quickstart
including a JUnit test, and attached it to the JIRA issue. I hope this fix gets
into the next maintenance release. I am too lazy to create a properly patched
jar and an MVN repo for my team right now ;)

Regards, Stefan



igor.vaynberg wrote:
try applying this patch and see if it helps

https://issues.apache.org/jira/browse/WICKET-1667

-igor

On Mon, Jun 9, 2008 at 8:11 AM, Stefan Fußenegger
<[EMAIL PROTECTED]> wrote:
I am just analysing a heap dump (god bless the
-XX:+HeapDumpOnOutOfMemoryError flag) of a recent application crash due to
an OutOfMemoryError ("GC overhead limit exceeded" to be precise). Using
jhat, the "175456 instances of class
org.apache.wicket.util.concurrent.ConcurrentHashMap$Entry" immediately got
my attention. While looking through the 107 instances of ConcurrentHashMap, I
found one *really* big one: Localizer.cache has a hash table length of
262144, each of its 32 segments with about 5300 entries, where a hash key is
a string, sometimes longer than 500 characters, similar to (see
Localizer.getCacheKey(String,Component)):

fooTitle.bar-
org.apache.wicket.markup.html.link.BookmarkablePageLink:fooLink-
org.apache.wicket.markup.html.panel.Fragment:track-
org.apache.wicket.markup.html.list.ListItem:14-
my.company.FooListPanel$1:fooList-my.company.FooListPanel:foos-
org.apache.wicket.markup.html.list.ListItem:0-
my.company.BarListPanel$1:bars-my.company.FooListPanel:panel-
my.company.boxes.BodyBox:2-
org.apache.wicket.markup.repeater.RepeatingView:body-
my.company.layout.Border:border-my.company.pages.music.FoobarPage:43-de-null

Those numbers pretty much convinced me: the Localizer cache has blown away
my application.

Looking at these hash keys, I suspect the following problem: those strings
are constructed from the "position" of a localized string on a page, which
is quite a bad thing if you use nested list views or repeating views to
construct your page. For instance, I have a panel with a long (pageable)
list of entries - possibly more than 5000 of them - which might appear at
various positions in a repeating view I use as a container for most of my
pages. Let's say there are 5 possible positions; that would cause 25,000
cached entries, each with a key of 300+ characters plus some more characters
for the cached message - feel free to do the maths. From a quick estimate
I'd say: no wonder this has blown away my app.
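
Just to illustrate why the keys explode with nesting depth: the following is
only a rough approximation I inferred from the example key above (resource key,
then a class:id segment for every component up to the page, then locale and
style) - it is not the actual Wicket source of Localizer.getCacheKey():

import org.apache.wicket.Component;

// Rough sketch only, inferred from the key format above; the real
// Localizer.getCacheKey() may differ in details.
public class CacheKeySketch {
    public static String buildKey(String resourceKey, Component component) {
        StringBuilder key = new StringBuilder(resourceKey);
        // one "-<class>:<id>" segment per component on the path up to (and including) the page
        for (Component cursor = component; cursor != null; cursor = cursor.getParent()) {
            key.append('-').append(cursor.getClass().getName()).append(':').append(cursor.getId());
        }
        key.append('-').append(component.getLocale()); // e.g. "de"
        key.append('-').append(component.getStyle());  // "null" if no style is set
        return key.toString();
    }
}

Every distinct component path - i.e. every list item index and every repeater
position - yields a distinct key, which is exactly what the numbers above show.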

As a quick fix, I'd suggest regularly clearing the Localizer cache, using a
more sophisticated cache (one that expires old entries once in a while!!), or
disabling the cache completely. However, don't try to override
Localizer.newCache() and clear the cache regularly: clearCache() will
replace your cache with a plain ConcurrentHashMap (not one created by
Localizer.newCache()). Then again, it's quite unlikely that this will bite
anyone, as newCache() is private anyway ;) I am going to add some code to clear
the cache regularly.
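
Something along these lines, for example - just an untested sketch of the
"clear it regularly" idea against the Wicket 1.3 API
(Application.getResourceSettings().getLocalizer().clearCache()); the hourly
java.util.Timer and the MyApplication/HomePage names are only placeholders:

import java.util.Timer;
import java.util.TimerTask;
import org.apache.wicket.protocol.http.WebApplication;

// Untested sketch: periodically wipe the Localizer cache so it cannot grow
// without bound. MyApplication and HomePage are placeholder names; adjust
// the interval to taste.
public class MyApplication extends WebApplication {

    protected void init() {
        super.init();
        Timer timer = new Timer("localizer-cache-cleaner", true); // daemon thread
        timer.schedule(new TimerTask() {
            public void run() {
                // drops all cached localized strings (see Localizer.clearCache())
                getResourceSettings().getLocalizer().clearCache();
            }
        }, 3600 * 1000L, 3600 * 1000L); // once per hour
    }

    public Class getHomePage() {
        return HomePage.class; // placeholder page class
    }
}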

Best regards, Stefan

PS: I'll also create a JIRA issue, but I am really short on time right now.

-----
Stefan Fußenegger
http://talk-on-tech.blogspot.com // looking for a nicer domain ;)











