Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-19 Thread Quan Zhou
I ran into the same problem several days ago and resolved it by overriding a
method of Localizer.

see:
http://www.nabble.com/Why-Localizer-Retained-so-many-heapsize--to17142582.html#a17182935

that may help you a little.
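
For reference, a minimal sketch of that kind of override - illustrative
only: it assumes getCacheKey(String, Component) is overridable in your
Wicket version (in some 1.3.x releases it is private, so you may need a
patched copy of the class instead), and that your setup lets you install a
custom Localizer via the resource settings:

    import org.apache.wicket.Component;
    import org.apache.wicket.Localizer;

    // Sketch: key the cache on the component's class and locale instead of
    // its full page-relative path, so all items of a repeater share a
    // single cache entry rather than one entry per position.
    public class ClassBasedLocalizer extends Localizer {
        protected String getCacheKey(final String key, final Component component) {
            if (component == null) {
                return key;
            }
            return key + "-" + component.getClass().getName()
                + "-" + component.getLocale();
        }
    }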


2008/6/9 Stefan Fußenegger [EMAIL PROTECTED]:


 I am just analysing a heap dump (god bless the
 -XX:+HeapDumpOnOutOfMemoryError flag) of a recent application crash due to
 an OutOfMemoryError (GC overhead limit exceeded, to be precise). Using
 jhat, the 175456 instances of class
 org.apache.wicket.util.concurrent.ConcurrentHashMap$Entry immediately got
 my attention. While looking through the 107 instances of ConcurrentHashMap,
 I found one *really* big one: Localizer.cache has a hash table length of
 262144, each of its 32 segments with about 5300 entries, where a hash key
 is a string, sometimes longer than 500 characters, similar to the following
 (see Localizer.getCacheKey(String, Component)):


 fooTitle.bar-org.apache.wicket.markup.html.link.BookmarkablePageLink:fooLink-org.apache.wicket.markup.html.panel.Fragment:track-org.apache.wicket.markup.html.list.ListItem:14-my.company.FooListPanel$1:fooList-my.company.FooListPanel:foos-org.apache.wicket.markup.html.list.ListItem:0-my.company.BarListPanel$1:bars-my.company.FooListPanel:panel-my.company.boxes.BodyBox:2-org.apache.wicket.markup.repeater.RepeatingView:body-my.company.layout.Border:border-my.company.pages.music.FoobarPage:43-de-null
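
 (As an aside, the key format is roughly the resource key, then class:id
 for each component from leaf to page, then locale and style - "de" and
 "null" in the example above. A sketch of how such a path-based key gets
 assembled - illustrative, not the actual Wicket source:)

     import org.apache.wicket.Component;

     // Sketch: walk from the leaf component up to the page, appending
     // class:id pairs, then locale and style. Every distinct position in
     // a repeater therefore yields a distinct key.
     public final class PathKeyExample {
         static String pathBasedKey(final String resourceKey, final Component component) {
             final StringBuilder sb = new StringBuilder(resourceKey);
             for (Component c = component; c != null; c = c.getParent()) {
                 sb.append('-').append(c.getClass().getName())
                   .append(':').append(c.getId());
             }
             return sb.append('-').append(component.getLocale())
                      .append('-').append(component.getStyle()).toString();
         }
     }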

 Those numbers pretty much convinced me: The localizer cache has blown away
 my application.

 Looking at these hash keys, I suspect the following problem: the strings
 are constructed from the position of a localized string on a page, which
 is quite a bad thing if you use nested list views or repeating views to
 construct your page. For instance, I have a panel with a long (pageable)
 list of entries - maybe 5000+ entries - which might appear at various
 positions in a repeating view I use as a container for most of my pages.
 Let's say there are 5 possible positions: this would cause 25,000 cached
 entries (5 x 5,000), each with a key of 300+ characters plus some more
 characters for the cached message - feel free to do the maths. From a
 quick estimate I'd say: no wonder this has blown away my app.
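
 (Doing that maths - a rough, illustrative back-of-envelope; the per-entry
 overhead below is a guess, since real JVM overhead varies:)

     // Rough estimate of the key footprint described above.
     public class CacheFootprint {
         public static void main(final String[] args) {
             final long entries = 5L * 5000;  // 5 positions x 5000 list entries
             final long keyChars = 300;       // 300+ character keys
             final long bytesPerChar = 2;     // Java strings are UTF-16
             final long overhead = 64;        // entry + String headers, roughly
             final long bytes = entries * (keyChars * bytesPerChar + overhead);
             System.out.printf("~%.1f MB for the keys alone%n",
                 bytes / (1024.0 * 1024.0));
         }
     }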

 As a quick fix, I'd suggest regularly clearing the localizer cache, using
 a more sophisticated cache (one that expires old entries once in a
 while!!), or disabling the cache completely. However, don't try to
 override Localizer.newCache() and then clear the cache regularly:
 clearCache() will replace your cache with a plain ConcurrentHashMap (it
 does not use Localizer.newCache()). Then again, it's quite unlikely this
 will bite anybody, as newCache() is private anyway ;) I am going to add
 some code to clear the cache regularly.
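
 (Something along these lines - a sketch only, assuming a Wicket 1.3-style
 WebApplication and that Localizer.clearCache() is public in your version;
 the one-hour interval is an arbitrary choice:)

     import java.util.concurrent.Executors;
     import java.util.concurrent.ScheduledExecutorService;
     import java.util.concurrent.TimeUnit;
     import org.apache.wicket.protocol.http.WebApplication;

     // Sketch: clear the Localizer cache periodically so path-based keys
     // cannot accumulate without bound. Remember to shut the executor
     // down when the application is destroyed.
     public abstract class MyApplication extends WebApplication {
         private final ScheduledExecutorService scheduler =
             Executors.newSingleThreadScheduledExecutor();

         protected void init() {
             super.init();
             scheduler.scheduleAtFixedRate(new Runnable() {
                 public void run() {
                     getResourceSettings().getLocalizer().clearCache();
                 }
             }, 1, 1, TimeUnit.HOURS);
         }
     }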

 Best regards, Stefan

 PS: I'll also create a JIRA issue, but I am really short on time right now.





Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-19 Thread Stefan Fußenegger

Hi,

According to https://issues.apache.org/jira/browse/WICKET-1667, this issue
will be fixed in Wicket 1.3.4 and 1.4-M3. I have already put the patch into
production and it works. Thanks to Igor!

Regards, Stefan


Heart wrote:
 
 I ran into the same problem several days ago and resolved it by overriding
 a method of Localizer.
 
 see:
 http://www.nabble.com/Why-Localizer-Retained-so-many-heapsize--to17142582.html#a17182935
 
 that may help you a little.
 
 




Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-16 Thread Juha Alatalo

Hi,

we have been using this patched version in a production environment for a
few days now. It seems to be working nicely; the memory problems disappeared.


- Juha

Igor Vaynberg wrote:

if someone can confirm that the patch works in a production env i will
be happy to commit it. i just havent had the time to test it myself
yet.

-igor


Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-16 Thread Igor Vaynberg
ive committed the fix for 1.3 and 1.4. please test it out. there was a
minor tweak to also put the page class into the cache key, for wicket-1697.
there is also a little bit of syncing going on in localizer now, used to
translate a class name to an integer in order to drastically shorten the
cache key strings. this is not optimal in 1.3, so if anyone notices it is a
hotspot we can fix that.

-igor
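
(for the curious, the shortening he describes amounts to interning class
names to small integers and building keys from the integers - roughly like
the sketch below, which is illustrative and not the committed code. the
simple lock-based variant is the "not optimal" part; a ConcurrentHashMap
would avoid the lock.)

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: map each class name to a small integer once, so a key
    // segment like "org.apache.wicket.markup.html.list.ListItem:14"
    // shrinks to something like "7:14".
    public final class ClassNameInterner {
        private final Map<String, Integer> ids = new HashMap<String, Integer>();
        private int next = 0;

        public synchronized int idFor(final String className) {
            Integer id = ids.get(className);
            if (id == null) {
                id = Integer.valueOf(next++);
                ids.put(className, id);
            }
            return id.intValue();
        }
    }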

On Mon, Jun 16, 2008 at 2:05 AM, Juha Alatalo
[EMAIL PROTECTED] wrote:
 Hi,

 we have been using this patched version in a production environment for a
 few days now. It seems to be working nicely; the memory problems disappeared.

 - Juha


Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-12 Thread Juha Alatalo
We are now using the patched version of 1.3.X in a production environment
and memory usage seems to be much more stable.

Is there still some bug in PageWindowManager? It seems that the size of
idToWindowIndices increases every time a WebPage is opened in a browser.

I ran a JMeter test during the last night. It simulated 50 users, each
opening a page once every 0 - 60 seconds. JProfiler was showing as many
IntHashMap$Entry instances as there were samples in JMeter.

- Juha




Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-12 Thread Matej Knopp
I think that should already be fixed in SVN. We weren't removing the
entries on session expiration.

-Matej


Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-10 Thread Stefan Fußenegger

Hi Igor,

Thanks for your quick reply and the patch - and sorry for searching only
the mailing list, not JIRA.

Your patch was for 1.4; I applied it to 1.3.3, created a quickstart
including a JUnit test and attached it to the JIRA issue. Hope this fix
gets into the next maintenance release. I am too lazy to create a properly
patched jar and a MVN repo for my team right now ;)

Regards, Stefan



igor.vaynberg wrote:
 
 try applying this patch and see if it helps
 
 https://issues.apache.org/jira/browse/WICKET-1667
 
 -igor
 



Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-10 Thread Daniel Frisk

So the patch did help?

I too have observed this problem, but at the time it was less of a problem
than other heap eaters; now it is next in line. We have added a script
which automatically restarts the server when repeated OOMEs occur, and we
are down to a couple of restarts per week without the patch. But still,
who wouldn't want to see months of uptime...


// Daniel
jalbum.net





Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-10 Thread Stefan Fußenegger

Hi Daniel,

I didn't put the patch into production yet, but I am quite confident that
it will help. As you can see in the example I attached to the JIRA issue
(I just attached a new version), the unpatched Localizer had 200 entries in
its cache, the patched Localizer only four - which is a Good Thing (tm), as
there are only 4 different cached values!
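
(A quickstart test of that shape - purely illustrative, not the one
attached to the issue; MyListPage stands in for a page whose markup
contains a repeater with localized messages:)

    import junit.framework.TestCase;
    import org.apache.wicket.util.tester.WicketTester;

    // Sketch: render a page containing a ListView several times. With the
    // unpatched Localizer every item position adds a cache entry; with
    // the patched one the count stays at the number of distinct messages.
    public class LocalizerCacheTest extends TestCase {
        public void testRepeatedRendering() {
            final WicketTester tester = new WicketTester();
            for (int i = 0; i < 50; i++) {
                tester.startPage(MyListPage.class);
                tester.assertRenderedPage(MyListPage.class);
            }
            // Compare the Localizer cache size here, e.g. via a debugger
            // or a Localizer subclass that exposes the entry count.
        }
    }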

Regards, Stefan




Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-10 Thread Juha Alatalo

Hi All,

I ran our profiling tests (version 1.3.3) using the Application.java and
Localizer.java patched by Stefan. The patch seems to be solving our memory
problems.

Is this patch coming to 1.3.4, and do you have any idea when 1.3.4 will be
released?


Best Regards
- Juha



Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-10 Thread Igor Vaynberg
if someone can confirm that the patch works in a production env i will
be happy to commit it. i just havent had the time to test it myself
yet.

-igor


Re: Localizer cache with 150.000+ entries causing OutOfMemory

2008-06-09 Thread Igor Vaynberg
try applying this patch and see if it helps

https://issues.apache.org/jira/browse/WICKET-1667

-igor
