Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-12 Thread Johan Compagner
Ok, if you don't strip the jsessionid (like they do, or are trying to do, in
the other thread where you also replied), then maybe there is now a crawler
that strips it when it sees it and sends the stripped version back to you. If
somebody does that, then sessions are created very quickly.

Maybe you can take a look at your access log?

On 4/12/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
 Thanks for the insight - didn't know that the webapp had to make a call to
 force the cookie-less support.  Someone asked for how often Google is
 crawling us.  It seems like at any given point of almost any day, we have
 one crawler or another going through the site.  I included some numbers
 below to give an idea.

 Igor - thanks - it could easily be the search form, which is the only thing
 that would be stateful on about 95% of the pages that will be crawled.  I
 made myself a note yesterday that I need to look at making that a stateless
 form to see if that fixes the unnecessary session creation.  I'll post the
 results.

 The one thing I have determined from all this (which answers a question from
 the other thread) is that Google (and the other crawlers) is definitely
 going to pages with a jsessionid in the URL, and the jsessionid is not
 appearing in the search results (with 2 exceptions out of 30,000+ pages
 indexed).  But I know that maybe only a month ago, there were hundreds of
 pages from our site that had jsessionids in the URLs that Google had
 indexed.  Could it be possible that they are stripping the jsessionid from
 URLs they visit now?  I haven't found anywhere that they volunteer much
 information on this matter.

 Bottom line - thanks for everyone's help - I have a bandaid on this now
 which will buy me the time to see what's creating the early unnecessary
 sessions.  Is there a particular place in the code I should put a breakpoint
 to see where the session is being created / where it says "oh, you have a
 stateful page - here's the component that makes it stateful"?  That's where
 I'm headed next, so if anyone knows where that piece of code is, the tip
 would be greatly appreciated.

 Thanks again,
 Jeremy

 Here's a few numbers for the curious.  I took a four minute segment of our
 logs from a very slow traffic period - middle of the night.  In that time,
 67 sessions were created.  Then did reverse DNS lookups on the IPs.  The
 traffic was from:

 cuill.com crawler   4   (interesting - new search engine - didn't know
 about it before)
 googlebot           4
 live.com bot        1
 unknown            13
 user               28
 yahoo crawler      26




 On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
 wrote:

  On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
  [EMAIL PROTECTED] wrote:
If you go to http://www.texashuntfish.com/thf/app/home, you will notice
  that
the first time you hit the page, there are jsessionids in every link -
  same
if you go there with cookies disabled.
 
  as far as i know jsessionid is only appended once an http session is
  created and needs to be tracked. so the fact you see it right after
  you go to /app/home should tell you that right away the session is
  created and bound. not good. something in your page is stateful.
 
I think this problem is caused by something making the session bind at
  an
earlier time than it did when I was using 1.2.6 - it's probably still
something that I'm doing weird, but I need to find it.
 
  i think this is unlikely. if i remember correctly delayed session
  creation was introduced in 1.3.0. 1.2.6 _always created a session on
  first request_ regardless of whether or not the page you requested was
  stateless or stateful.
 
  -igor
 
 
  
On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
wrote:
  
  
  
 by the way it is all your own fault that you get so many session.
 I just searched for your other mails and i did came across: Removing
  the
 jsessionid for SEO

 where you where explaining that you remove the jsessionids from the
  urls..

 johan


 On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
 [EMAIL PROTECTED]
 wrote:

  I upgraded my biggest production app from 1.2.6 to 1.3 last week.
   I
 have
  had several apps running on 1.3 since it was in beta with no
  problems -
  running for months without restarting.
 
  This app receives more traffic than any of the rest.  We have a
  decent
  server, and I had always allowed Tomcat 1.5GB of RAM to operate
  with.
  It
  never had a problem doing so, and I didn't have OutOfMemory errors.
  Now,
  after the upgrade to 1.3.2, I am having all sorts of trouble.  It
  ran
 for
  several days without a problem, but then started dying a couple
  times a
  day.  Today it has died four times.  Here are a couple odd things
  about
  this:
 
- On 1.2.6, I never had a problem with stability - the app would
  run
weeks between 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-12 Thread Martijn Dashorst
You could also try to detect the spider and set its session timeout
(if one was created) to 1 minute or so... Detecting the bots shouldn't
be too hard, if I understand some of the articles on robots.txt correctly.
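
Something roughly like this (an untested sketch at the servlet level; the
user-agent substrings are only examples, not a complete list) would do it:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Shorten the session timeout for anything that looks like a crawler.
public final class BotSessions {
    private static final String[] BOT_HINTS = { "googlebot", "slurp", "msnbot", "twiceler" };

    public static void shortenTimeoutForBots(HttpServletRequest request) {
        String agent = request.getHeader("User-Agent");
        if (agent == null) {
            return;
        }
        agent = agent.toLowerCase();
        for (int i = 0; i < BOT_HINTS.length; i++) {
            if (agent.indexOf(BOT_HINTS[i]) >= 0) {
                // only touch a session that already exists
                HttpSession session = request.getSession(false);
                if (session != null) {
                    session.setMaxInactiveInterval(60); // one minute
                }
                return;
            }
        }
    }
}

You could call that from a servlet filter on every request.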

Martijn

On 4/12/08, Johan Compagner [EMAIL PROTECTED] wrote:
 Ok, if you don't strip the jsessionid (like they do, or are trying to
  do, in the other thread where you also replied), then maybe there is
  now a crawler that strips it when it sees it and sends the stripped
  version back to you. If somebody does that, then sessions are created
  very quickly.

  Maybe you can look at your accesslog?


  On 4/12/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
   Thanks for the insight - didn't know that the webapp had to make a call to
   force the cookie-less support.  Someone asked for how often Google is
   crawling us.  It seems like at any given point of almost any day, we have
   one crawler or another going through the site.  I included some numbers
   below to give an idea.
  
   Igor - thanks - it could easily be the search form, which is the only thing
   that would be stateful on about 95% of the pages that will be crawled.  I
   made myself a note yesterday that I need to look at making that a 
 stateless
   form to see if that fixes the unnecessary session creation.  I'll post the
   results.
  
   The one thing I have determined from all this (which answers a question 
 from
   the other thread) is that Google (and the other crawlers) is definitely
   going to pages with a jsessionid in the URL, and the jsessionid is not
   appearing in the search results (with 2 exceptions out of 30,000+ pages
   indexed).  But I know that maybe only a month ago, there were hundreds of
   pages from our site that had jsessionids in the URLs that Google had
   indexed.  Could it be possible that they are stripping the jsessionid from
   URLs they visit now?  I haven't found anywhere that they volunteer much
   information on this matter.
  
   Bottom line - thanks for everyone's help - I have a bandaid on this now
   which will buy me the time to see what's creating the early unnecessary
   sessions.  Is there a particular place in the code I should put a 
 breakpoint
    to see where the session is being created / where it says "oh, you have a
    stateful page - here's the component that makes it stateful"?  That's where
   I'm headed next, so if anyone knows where that piece of code is, the tip
   would be greatly appreciated.
  
   Thanks again,
   Jeremy
  
   Here's a few numbers for the curious.  I took a four minute segment of our
   logs from a very slow traffic period - middle of the night.  In that time,
   67 sessions were created.  Then did reverse DNS lookups on the IPs.  The
   traffic was from:
  
    cuill.com crawler   4   (interesting - new search engine - didn't know
    about it before)
    googlebot           4
    live.com bot        1
    unknown            13
    user               28
    yahoo crawler      26
  
  
  
  
   On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
   wrote:
  
On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
  If you go to http://www.texashuntfish.com/thf/app/home, you will 
 notice
that
  the first time you hit the page, there are jsessionids in every link -
same
  if you go there with cookies disabled.
   
as far as i know jsessionid is only appended once an http session is
created and needs to be tracked. so the fact you see it right after
you go to /app/home should tell you that right away the session is
created and bound. not good. something in your page is stateful.
   
  I think this problem is caused by something making the session bind at
an
  earlier time than it did when I was using 1.2.6 - it's probably still
  something that I'm doing weird, but I need to find it.
   
i think this is unlikely. if i remember correctly delayed session
creation was introduced in 1.3.0. 1.2.6 _always created a session on
first request_ regardless of whether or not the page you requested was
stateless or stateful.
   
-igor
   
   

  On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
  wrote:



   by the way it is all your own fault that you get so many session.
   I just searched for your other mails and i did came across: 
 Removing
the
   jsessionid for SEO
  
   where you where explaining that you remove the jsessionids from the
urls..
  
   johan
  
  
   On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
   [EMAIL PROTECTED]
   wrote:
  
I upgraded my biggest production app from 1.2.6 to 1.3 last week.
 I
   have
had several apps running on 1.3 since it was in beta with no
problems -
running for months without restarting.
   
This app receives more traffic than any of the rest.  We have a
decent
server, and I had always allowed 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-12 Thread Ryan Holmes
If you hit a wall in terms of decreasing session timeout or
deferring/avoiding session creation, you might want to look at Tomcat's
PersistentManager. It passivates idle (but non-expired) sessions out of RAM
to either disk or a database. The JDBC version should give you better
performance and more room to scale.

http://tomcat.apache.org/tomcat-5.5-doc/config/manager.html
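
A minimal sketch of what that looks like in the webapp's context.xml (untested;
the attribute values here are only examples - see the doc above for the full
list, and swap the FileStore for a JDBCStore if you want the JDBC version):

<Context>
  <!-- swap idle sessions out of memory after 5 minutes of inactivity -->
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleBackup="60"
           maxIdleSwap="300"
           minIdleSwap="120"
           saveOnRestart="true">
    <Store className="org.apache.catalina.session.FileStore"
           directory="swapped-sessions"/>
  </Manager>
</Context>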

-Ryan

On Fri, Apr 11, 2008 at 9:26 PM, Jeremy Thomerson 
[EMAIL PROTECTED] wrote:

 Thanks for the insight - didn't know that the webapp had to make a call to
 force the cookie-less support.  Someone asked for how often Google is
 crawling us.  It seems like at any given point of almost any day, we have
 one crawler or another going through the site.  I included some numbers
 below to give an idea.

 Igor - thanks - it could easily be the search form, which is the only
 thing
 that would be stateful on about 95% of the pages that will be crawled.  I
 made myself a note yesterday that I need to look at making that a
 stateless
 form to see if that fixes the unnecessary session creation.  I'll post the
 results.

 The one thing I have determined from all this (which answers a question
 from
 the other thread) is that Google (and the other crawlers) is definitely
 going to pages with a jsessionid in the URL, and the jsessionid is not
 appearing in the search results (with 2 exceptions out of 30,000+ pages
 indexed).  But I know that maybe only a month ago, there were hundreds of
 pages from our site that had jsessionids in the URLs that Google had
 indexed.  Could it be possible that they are stripping the jsessionid from
 URLs they visit now?  I haven't found anywhere that they volunteer much
 information on this matter.

 Bottom line - thanks for everyone's help - I have a bandaid on this now
 which will buy me the time to see what's creating the early unnecessary
 sessions.  Is there a particular place in the code I should put a
 breakpoint
 to see where the session is being created / where it says "oh, you have a
 stateful page - here's the component that makes it stateful"?  That's
 where
 I'm headed next, so if anyone knows where that piece of code is, the tip
 would be greatly appreciated.

 Thanks again,
 Jeremy

 Here's a few numbers for the curious.  I took a four minute segment of our
 logs from a very slow traffic period - middle of the night.  In that time,
 67 sessions were created.  Then did reverse DNS lookups on the IPs.  The
 traffic was from:

 cuill.com crawler   4   (interesting - new search engine - didn't know
 about it before)
 googlebot           4
 live.com bot        1
 unknown            13
 user               28
 yahoo crawler      26




 On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
 wrote:

  On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
  [EMAIL PROTECTED] wrote:
If you go to http://www.texashuntfish.com/thf/app/home, you will
 notice
  that
the first time you hit the page, there are jsessionids in every link
 -
  same
if you go there with cookies disabled.
 
  as far as i know jsessionid is only appended once an http session is
  created and needs to be tracked. so the fact you see it right after
  you go to /app/home should tell you that right away the session is
  created and bound. not good. something in your page is stateful.
 
I think this problem is caused by something making the session bind
 at
  an
earlier time than it did when I was using 1.2.6 - it's probably still
something that I'm doing weird, but I need to find it.
 
  i think this is unlikely. if i remember correctly delayed session
  creation was introduced in 1.3.0. 1.2.6 _always created a session on
  first request_ regardless of whether or not the page you requested was
  stateless or stateful.
 
  -igor
 
 
  
On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner 
 [EMAIL PROTECTED]
wrote:
  
  
  
 by the way it is all your own fault that you get so many session.
 I just searched for your other mails and i did came across:
 Removing
  the
 jsessionid for SEO

 where you where explaining that you remove the jsessionids from the
  urls..

 johan


 On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
 [EMAIL PROTECTED]
 wrote:

  I upgraded my biggest production app from 1.2.6 to 1.3 last week.
   I
 have
  had several apps running on 1.3 since it was in beta with no
  problems -
  running for months without restarting.
 
  This app receives more traffic than any of the rest.  We have a
  decent
  server, and I had always allowed Tomcat 1.5GB of RAM to operate
  with.
  It
  never had a problem doing so, and I didn't have OutOfMemory
 errors.
  Now,
  after the upgrade to 1.3.2, I am having all sorts of trouble.  It
  ran
 for
  several days without a problem, but then started dying a couple
  times a
  day.  Today it has died four times.  Here are a couple odd things
  about
  this:
 
- On 1.2.6, I 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Johan Compagner
by the way it is all your own fault that you get so many sessions.
I just searched for your other mails and came across "Removing the
jsessionid for SEO"

where you were explaining that you remove the jsessionids from the urls..

johan


On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson [EMAIL PROTECTED]
wrote:

 I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I have
 had several apps running on 1.3 since it was in beta with no problems -
 running for months without restarting.

 This app receives more traffic than any of the rest.  We have a decent
 server, and I had always allowed Tomcat 1.5GB of RAM to operate with.  It
 never had a problem doing so, and I didn't have OutOfMemory errors.  Now,
 after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran for
 several days without a problem, but then started dying a couple times a
 day.  Today it has died four times.  Here are a couple odd things about
 this:

   - On 1.2.6, I never had a problem with stability - the app would run
   weeks between restarts (I restart once per deployment, anywhere from
 once a
   week to at the longest about two months between deploy / restart).
   - Tomcat DIES instead of hanging when there is a problem.  Always
   before, if I had an issue, Tomcat would hang, and there would be OOM in
 the
   logs.  Now, when it crashes, and I sign in to the server, Tomcat is not
   running at all.  There is nothing in the Tomcat logs that says anything,
 or
   in eventvwr.
   - I do not get OutOfMemory error in any logs, whereas I have always
   seen it in the logs before when I had an issue with other apps.  I am
   running Tomcat as a service on Windows, but it writes stdout / stderr to
   logs, and I write my logging out to logs, and none of these logs include
 ANY
   errors - they all just suddenly stop at the time of the crash.

 My money is that it is an OOM error caused by something I am doing that I
 shouldn't be doing with Wicket.  There are no logs that even say it is
 an OOM, but the memory continues to increase linearly over time as the app
 runs now (it didn't do that before).  My first guess is my previous
 prolific use of anonymous inner classes.  I have seen in the email
 threads that this shouldn't be done in 1.3.

 Of course, the real answer is that I'm going to be digging through
 profilers
 and lines of code until I get this fixed.

 My question, though, is from the Wicket devs / experienced users - where
 should I look first?  Is there something that changed between 1.2.6 and
 1.3
 that might have caused me problems where 1.2.6 was more forgiving?

 I'm running the app with JProbe right now so that I can get a snapshot of
 memory when it gets really high.

 Thank you,
 Jeremy Thomerson



Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Jeremy Thomerson
Thanks for your not very helpful email, but unfortunately, you're wrong.  In
that other email, I did say "But, most don't (have jsessionid) because
almost all of my links are bookmarkable."  I don't strip out jsessionid - I
don't think you even can without disabling cookieless support - your
container adds the jsessionid to links in your returned HTML - it's not like
you add it (or remove it) manually.

If you go to http://www.texashuntfish.com/thf/app/home, you will notice that
the first time you hit the page, there are jsessionids in every link - same
if you go there with cookies disabled.  This won't happen if you just go to
http://www.texashuntfish.com/ because it does a redirect, and you end up
sending cookies back, and no jsessionid is needed.

I really don't know why Google doesn't index the jsessionid in our URLs - we
have 30,600 pages in Google's index[1], and only two have a jsessionid[2].

If anyone is interested, a slightly modified version of the code I pasted in
pastebin (linked in an earlier email; the change just sets different expiration
lengths depending on signed in / not signed in) worked - we're maintaining
100-300 sessions now, and no crashes in the 24 hours it's been
running.  This isn't a fix - it's a bandaid.

I think this problem is caused by something making the session bind at an
earlier time than it did when I was using 1.2.6 - it's probably still
something that I'm doing weird, but I need to find it.  Looking at the logs,
we're still having one to two sessions created every second - they're just
getting cleaned up better now.

[1] -
http://www.google.com/search?q=site%3Atexashuntfish.com&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a
[2] -
http://www.google.com/search?hl=en&safe=off&client=firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=bFp&q=site%3Atexashuntfish.com+inurl%3Ajsessionid&btnG=Search

On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
wrote:

 by the way it is all your own fault that you get so many sessions.
 I just searched for your other mails and came across "Removing the
 jsessionid for SEO"

 where you were explaining that you remove the jsessionids from the urls..

 johan


 On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
 [EMAIL PROTECTED]
 wrote:

  I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
 have
  had several apps running on 1.3 since it was in beta with no problems -
  running for months without restarting.
 
  This app receives more traffic than any of the rest.  We have a decent
  server, and I had always allowed Tomcat 1.5GB of RAM to operate with.
  It
  never had a problem doing so, and I didn't have OutOfMemory errors.
  Now,
  after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran
 for
  several days without a problem, but then started dying a couple times a
  day.  Today it has died four times.  Here are a couple odd things about
  this:
 
- On 1.2.6, I never had a problem with stability - the app would run
weeks between restarts (I restart once per deployment, anywhere from
  once a
week to at the longest about two months between deploy / restart).
- Tomcat DIES instead of hanging when there is a problem.  Always
before, if I had an issue, Tomcat would hang, and there would be OOM
 in
  the
logs.  Now, when it crashes, and I sign in to the server, Tomcat is
 not
running at all.  There is nothing in the Tomcat logs that says
 anything,
  or
in eventvwr.
- I do not get OutOfMemory error in any logs, whereas I have always
seen it in the logs before when I had an issue with other apps.  I am
running Tomcat as a service on Windows, but it writes stdout / stderr
 to
logs, and I write my logging out to logs, and none of these logs
 include
  ANY
errors - they all just suddenly stop at the time of the crash.
 
  My money is that it is an OOM error caused by somewhere that I am doing
  something I shouldn't be with Wicket.  There's no logs that even say it
 is
  an OOM, but the memory continues to increase linearly over time as the
 app
  runs now (it didn't do that before).  My first guess is my previous
  proliferate use of anonymous inner classes.  I have seen in the email
  threads that this shouldn't be done in 1.3.
 
  Of course, the real answer is that I'm going to be digging through
  profilers
  and lines of code until I get this fixed.
 
  My question, though, is from the Wicket devs / experienced users - where
  should I look first?  Is there something that changed between 1.2.6 and
  1.3
  that might have caused me problems where 1.2.6 was more forgiving?
 
  I'm running the app with JProbe right now so that I can get a snapshot
 of
  memory when it gets really high.
 
  Thank you,
  Jeremy Thomerson
 



Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread James Carman
On Fri, Apr 11, 2008 at 9:37 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
 Thanks for your not very helpful email, but unfortunately, you're wrong.  In
  that other email, I did say "But, most don't (have jsessionid) because
  almost all of my links are bookmarkable."  I don't strip out jsessionid - I
  don't think you even can without disabling cookieless support - your
  container adds the jsessionid to links in your returned HTML - it's not like
  you add it (or remove it) manually.

I do not believe the servlet container adds the jsessionid into your
URLs automatically.  You have to call HttpServletResponse.encodeURL()
or HttpServletResponse.encodeRedirectURL() to get the jsessionid
appended.
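
For example (a plain servlet sketch with hypothetical names, just to show the
call - the container only rewrites URLs that are passed through
encodeURL()/encodeRedirectURL()):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LinkServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        req.getSession(true); // force a session so URL rewriting can kick in
        String encoded = resp.encodeURL("/thf/app/home");
        // ";jsessionid=..." is appended only if the client did not send a session cookie
        resp.setContentType("text/html");
        resp.getWriter().println("<a href=\"" + encoded + "\">home</a>");
    }
}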




Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Ryan Gravener
Are you storing a lot of variables in your session?  Also how often is
google visiting your site?

On Fri, Apr 11, 2008 at 9:37 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
 Thanks for your not very helpful email, but unfortunately, you're wrong.  In
  that other email, I did say "But, most don't (have jsessionid) because
  almost all of my links are bookmarkable."  I don't strip out jsessionid - I
  don't think you even can without disabling cookieless support - your
  container adds the jsessionid to links in your returned HTML - it's not like
  you add it (or remove it) manually.

  If you go to http://www.texashuntfish.com/thf/app/home, you will notice that
  the first time you hit the page, there are jsessionids in every link - same
  if you go there with cookies disabled.  This won't happen if you just go to
  http://www.texashuntfish.com/ because it does a redirect, and you end up
  sending cookies back, and no jsessionid is needed.

  I really don't know why Google doesn't index the jsessionid in our URLs - we
  have 30,600 pages in Google indexes[1], and only two have jsessionid[2].

  If anyone is interested, a slightly modified version of the code (just
  setting different expiration lengths depending on signed in / not signed in)
  I pasted in pastebin (linked in an earlier email) worked - we're maintaining
  100-300 sessions now, and no crashes in the past 24 hours it's been
  running.  This isn't a fix - it's a bandaid.

  I think this problem is caused by something making the session bind at an
  earlier time than it did when I was using 1.2.6 - it's probably still
  something that I'm doing weird, but I need to find it.  Looking at the logs,
  we're still having one to two sessions created every second - they're just
  getting cleaned up better now.

  [1] -
  
 http://www.google.com/search?q=site%3Atexashuntfish.com&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a
  [2] -
  
 http://www.google.com/search?hl=en&safe=off&client=firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=bFp&q=site%3Atexashuntfish.com+inurl%3Ajsessionid&btnG=Search

  On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
  wrote:



   by the way it is all your own fault that you get so many sessions.
   I just searched for your other mails and came across "Removing the
   jsessionid for SEO"
  
   where you were explaining that you remove the jsessionids from the urls..
  
   johan
  
  
   On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
   [EMAIL PROTECTED]
   wrote:
  
I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
   have
had several apps running on 1.3 since it was in beta with no problems -
running for months without restarting.
   
This app receives more traffic than any of the rest.  We have a decent
server, and I had always allowed Tomcat 1.5GB of RAM to operate with.
It
never had a problem doing so, and I didn't have OutOfMemory errors.
Now,
after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran
   for
several days without a problem, but then started dying a couple times a
day.  Today it has died four times.  Here are a couple odd things about
this:
   
  - On 1.2.6, I never had a problem with stability - the app would run
  weeks between restarts (I restart once per deployment, anywhere from
once a
  week to at the longest about two months between deploy / restart).
  - Tomcat DIES instead of hanging when there is a problem.  Always
  before, if I had an issue, Tomcat would hang, and there would be OOM
   in
the
  logs.  Now, when it crashes, and I sign in to the server, Tomcat is
   not
  running at all.  There is nothing in the Tomcat logs that says
   anything,
or
  in eventvwr.
  - I do not get OutOfMemory error in any logs, whereas I have always
  seen it in the logs before when I had an issue with other apps.  I am
  running Tomcat as a service on Windows, but it writes stdout / stderr
   to
  logs, and I write my logging out to logs, and none of these logs
   include
ANY
  errors - they all just suddenly stop at the time of the crash.
   
My money is that it is an OOM error caused by somewhere that I am doing
something I shouldn't be with Wicket.  There's no logs that even say it
   is
an OOM, but the memory continues to increase linearly over time as the
   app
runs now (it didn't do that before).  My first guess is my previous
proliferate use of anonymous inner classes.  I have seen in the email
threads that this shouldn't be done in 1.3.
   
Of course, the real answer is that I'm going to be digging through
profilers
and lines of code until I get this fixed.
   
My question, though, is from the Wicket devs / experienced users - where
should I look first?  Is there something that changed between 1.2.6 and
1.3
that might have caused me problems where 1.2.6 was more 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Igor Vaynberg
which wicket does for every url via WebResponse.encodeURL (or something
like that). of course if you subclass it and don't forward the
encodeURL call on to HttpServletResponse you effectively strip the jsessionid
from the urls.
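
something like this (untested sketch, class names are made up):

import javax.servlet.http.HttpServletResponse;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.protocol.http.WebResponse;

public class MyApplication extends WebApplication {

    public Class getHomePage() {
        return HomePage.class; // hypothetical home page class
    }

    // Returning the URL unchanged instead of letting WebResponse forward it to
    // HttpServletResponse.encodeURL() is what strips the jsessionid for
    // cookie-less clients (crawlers included).
    protected WebResponse newWebResponse(final HttpServletResponse servletResponse) {
        return new WebResponse(servletResponse) {
            public CharSequence encodeURL(CharSequence url) {
                return url; // never append ;jsessionid=...
            }
        };
    }
}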

-igor


On Fri, Apr 11, 2008 at 6:53 PM, James Carman
[EMAIL PROTECTED] wrote:
 On Fri, Apr 11, 2008 at 9:37 PM, Jeremy Thomerson
  [EMAIL PROTECTED] wrote:
   Thanks for your not very helpful email, but unfortunately, you're wrong.  
 In
that other email, I did say But, most don't (have jsessionid) because
almost all of my links are bookmarkable.  I don't strip out jsessionid - 
 I
don't think you even can without disabling cookieless support - your
container adds the jsessionid to links in your returned HTML - it's not 
 like
you add it (or remove it) manually.

  I do not believe the servlet container adds the jsessionid into your
  URLs automatically.  You have to call HttpServletResponse.encodeURL()
  or HttpServletResponse.encodeRedirectURL() to get the jsessionid
  appended.









Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Igor Vaynberg
On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
  If you go to http://www.texashuntfish.com/thf/app/home, you will notice that
  the first time you hit the page, there are jsessionids in every link - same
  if you go there with cookies disabled.

as far as i know jsessionid is only appended once an http session is
created and needs to be tracked. so the fact you see it right after
you go to /app/home should tell you that right away the session is
created and bound. not good. something in your page is stateful.

  I think this problem is caused by something making the session bind at an
  earlier time than it did when I was using 1.2.6 - it's probably still
  something that I'm doing weird, but I need to find it.

i think this is unlikely. if i remember correctly delayed session
creation was introduced in 1.3.0. 1.2.6 _always created a session on
first request_ regardless of whether or not the page you requested was
stateless or stateful.

-igor



  On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
  wrote:



   by the way it is all your own fault that you get so many sessions.
   I just searched for your other mails and came across "Removing the
   jsessionid for SEO"
  
   where you were explaining that you remove the jsessionids from the urls..
  
   johan
  
  
   On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
   [EMAIL PROTECTED]
   wrote:
  
I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
   have
had several apps running on 1.3 since it was in beta with no problems -
running for months without restarting.
   
This app receives more traffic than any of the rest.  We have a decent
server, and I had always allowed Tomcat 1.5GB of RAM to operate with.
It
never had a problem doing so, and I didn't have OutOfMemory errors.
Now,
after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran
   for
several days without a problem, but then started dying a couple times a
day.  Today it has died four times.  Here are a couple odd things about
this:
   
  - On 1.2.6, I never had a problem with stability - the app would run
  weeks between restarts (I restart once per deployment, anywhere from
once a
  week to at the longest about two months between deploy / restart).
  - Tomcat DIES instead of hanging when there is a problem.  Always
  before, if I had an issue, Tomcat would hang, and there would be OOM
   in
the
  logs.  Now, when it crashes, and I sign in to the server, Tomcat is
   not
  running at all.  There is nothing in the Tomcat logs that says
   anything,
or
  in eventvwr.
  - I do not get OutOfMemory error in any logs, whereas I have always
  seen it in the logs before when I had an issue with other apps.  I am
  running Tomcat as a service on Windows, but it writes stdout / stderr
   to
  logs, and I write my logging out to logs, and none of these logs
   include
ANY
  errors - they all just suddenly stop at the time of the crash.
   
My money is that it is an OOM error caused by somewhere that I am doing
something I shouldn't be with Wicket.  There's no logs that even say it
   is
an OOM, but the memory continues to increase linearly over time as the
   app
runs now (it didn't do that before).  My first guess is my previous
proliferate use of anonymous inner classes.  I have seen in the email
threads that this shouldn't be done in 1.3.
   
Of course, the real answer is that I'm going to be digging through
profilers
and lines of code until I get this fixed.
   
My question, though, is from the Wicket devs / experienced users - where
should I look first?  Is there something that changed between 1.2.6 and
1.3
that might have caused me problems where 1.2.6 was more forgiving?
   
I'm running the app with JProbe right now so that I can get a snapshot
   of
memory when it gets really high.
   
Thank you,
Jeremy Thomerson
   
  





Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Jeremy Thomerson
Thanks for the insight - didn't know that the webapp had to make a call to
force the cookie-less support.  Someone asked how often Google is
crawling us.  It seems like at any given point of almost any day, we have
one crawler or another going through the site.  I included some numbers
below to give an idea.

Igor - thanks - it could easily be the search form, which is the only thing
that would be stateful on about 95% of the pages that will be crawled.  I
made myself a note yesterday that I need to look at making that a stateless
form to see if that fixes the unnecessary session creation.  I'll post the
results.
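
Roughly what I have in mind (hypothetical names, untested - the point is just
swapping the Form for a StatelessForm and redirecting to a bookmarkable results
page instead of keeping state on the page):

import org.apache.wicket.PageParameters;
import org.apache.wicket.markup.html.form.StatelessForm;
import org.apache.wicket.markup.html.form.TextField;
import org.apache.wicket.markup.html.panel.Panel;
import org.apache.wicket.model.Model;

public class SearchPanel extends Panel {
    public SearchPanel(String id) {
        super(id);
        final Model query = new Model("");
        StatelessForm form = new StatelessForm("searchForm") {
            protected void onSubmit() {
                PageParameters params = new PageParameters();
                params.put("q", query.getObject());
                setResponsePage(SearchResultsPage.class, params); // hypothetical results page
            }
        };
        form.add(new TextField("q", query));
        add(form);
    }
}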

The one thing I have determined from all this (which answers a question from
the other thread) is that Google (and the other crawlers) is definitely
going to pages with a jsessionid in the URL, and the jsessionid is not
appearing in the search results (with 2 exceptions out of 30,000+ pages
indexed).  But I know that maybe only a month ago, there were hundreds of
pages from our site that had jsessionids in the URLs that Google had
indexed.  Could it be possible that they are stripping the jsessionid from
URLs they visit now?  I haven't found anywhere that they volunteer much
information on this matter.

Bottom line - thanks for everyone's help - I have a bandaid on this now
which will buy me the time to see what's creating the early unnecessary
sessions.  Is there a particular place in the code I should put a breakpoint
to see where the session is being created / where it says "oh, you have a
stateful page - here's the component that makes it stateful"?  That's where
I'm headed next, so if anyone knows where that piece of code is, the tip
would be greatly appreciated.

Thanks again,
Jeremy

Here are a few numbers for the curious.  I took a four-minute segment of our
logs from a very slow traffic period - middle of the night.  In that time,
67 sessions were created.  Then I did reverse DNS lookups on the IPs.  The
traffic was from:

cuill.com crawler   4   (interesting - new search engine - didn't know
about it before)
googlebot           4
live.com bot        1
unknown            13
user               28
yahoo crawler      26




On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
wrote:

 On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
 [EMAIL PROTECTED] wrote:
   If you go to http://www.texashuntfish.com/thf/app/home, you will notice
 that
   the first time you hit the page, there are jsessionids in every link -
 same
   if you go there with cookies disabled.

 as far as i know jsessionid is only appended once an http session is
 created and needs to be tracked. so the fact you see it right after
 you go to /app/home should tell you that right away the session is
 created and bound. not good. something in your page is stateful.

   I think this problem is caused by something making the session bind at
 an
   earlier time than it did when I was using 1.2.6 - it's probably still
   something that I'm doing weird, but I need to find it.

 i think this is unlikely. if i remember correctly delayed session
 creation was introduced in 1.3.0. 1.2.6 _always created a session on
 first request_ regardless of whether or not the page you requested was
 stateless or stateful.

 -igor


 
   On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
   wrote:
 
 
 
by the way it is all your own fault that you get so many session.
I just searched for your other mails and i did came across: Removing
 the
jsessionid for SEO
   
where you where explaining that you remove the jsessionids from the
 urls..
   
johan
   
   
On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
[EMAIL PROTECTED]
wrote:
   
 I upgraded my biggest production app from 1.2.6 to 1.3 last week.
  I
have
 had several apps running on 1.3 since it was in beta with no
 problems -
 running for months without restarting.

 This app receives more traffic than any of the rest.  We have a
 decent
 server, and I had always allowed Tomcat 1.5GB of RAM to operate
 with.
 It
 never had a problem doing so, and I didn't have OutOfMemory errors.
 Now,
 after the upgrade to 1.3.2, I am having all sorts of trouble.  It
 ran
for
 several days without a problem, but then started dying a couple
 times a
 day.  Today it has died four times.  Here are a couple odd things
 about
 this:

   - On 1.2.6, I never had a problem with stability - the app would
 run
   weeks between restarts (I restart once per deployment, anywhere
 from
 once a
   week to at the longest about two months between deploy /
 restart).
   - Tomcat DIES instead of hanging when there is a problem.  Always
   before, if I had an issue, Tomcat would hang, and there would be
 OOM
in
 the
   logs.  Now, when it crashes, and I sign in to the server, Tomcat
 is
not
   running at all.  There is nothing in the Tomcat logs that says
anything,
 or
   in eventvwr.
  

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Igor Vaynberg
try a breakpoint in ISessionStore.bind() - that is where the wicket
session is pushed into httpsession

-igor


On Fri, Apr 11, 2008 at 9:26 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
 Thanks for the insight - didn't know that the webapp had to make a call to
  force the cookie-less support.  Someone asked for how often Google is
  crawling us.  It seems like at any given point of almost any day, we have
  one crawler or another going through the site.  I included some numbers
  below to give an idea.

  Igor - thanks - it could easily be the search form, which is the only thing
  that would be stateful on about 95% of the pages that will be crawled.  I
  made myself a note yesterday that I need to look at making that a stateless
  form to see if that fixes the unnecessary session creation.  I'll post the
  results.

  The one thing I have determined from all this (which answers a question from
  the other thread) is that Google (and the other crawlers) is definitely
  going to pages with a jsessionid in the URL, and the jsessionid is not
  appearing in the search results (with 2 exceptions out of 30,000+ pages
  indexed).  But I know that maybe only a month ago, there were hundreds of
  pages from our site that had jsessionids in the URLs that Google had
  indexed.  Could it be possible that they are stripping the jsessionid from
  URLs they visit now?  I haven't found anywhere that they volunteer much
  information on this matter.

  Bottom line - thanks for everyone's help - I have a bandaid on this now
  which will buy me the time to see what's creating the early unnecessary
  sessions.  Is there a particular place in the code I should put a breakpoint
  to see where the session is being created / where it says "oh, you have a
  stateful page - here's the component that makes it stateful"?  That's where
  I'm headed next, so if anyone knows where that piece of code is, the tip
  would be greatly appreciated.

  Thanks again,
  Jeremy

  Here's a few numbers for the curious.  I took a four minute segment of our
  logs from a very slow traffic period - middle of the night.  In that time,
  67 sessions were created.  Then did reverse DNS lookups on the IPs.  The
  traffic was from:

  cuill.com crawler   4   (interesting - new search engine - didn't know
  about it before)
  googlebot           4
  live.com bot        1
  unknown            13
  user               28
  yahoo crawler      26




  On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
  wrote:



   On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
   [EMAIL PROTECTED] wrote:
 If you go to http://www.texashuntfish.com/thf/app/home, you will notice
   that
 the first time you hit the page, there are jsessionids in every link -
   same
 if you go there with cookies disabled.
  
   as far as i know jsessionid is only appended once an http session is
   created and needs to be tracked. so the fact you see it right after
   you go to /app/home should tell you that right away the session is
   created and bound. not good. something in your page is stateful.
  
 I think this problem is caused by something making the session bind at
   an
 earlier time than it did when I was using 1.2.6 - it's probably still
 something that I'm doing weird, but I need to find it.
  
   i think this is unlikely. if i remember correctly delayed session
   creation was introduced in 1.3.0. 1.2.6 _always created a session on
   first request_ regardless of whether or not the page you requested was
   stateless or stateful.
  
   -igor
  
  
   
 On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
 wrote:
   
   
   
  by the way it is all your own fault that you get so many session.
  I just searched for your other mails and i did came across: Removing
   the
  jsessionid for SEO
 
  where you where explaining that you remove the jsessionids from the
   urls..
 
  johan
 
 
  On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
  [EMAIL PROTECTED]
  wrote:
 
   I upgraded my biggest production app from 1.2.6 to 1.3 last week.
I
  have
   had several apps running on 1.3 since it was in beta with no
   problems -
   running for months without restarting.
  
   This app receives more traffic than any of the rest.  We have a
   decent
   server, and I had always allowed Tomcat 1.5GB of RAM to operate
   with.
   It
   never had a problem doing so, and I didn't have OutOfMemory errors.
   Now,
   after the upgrade to 1.3.2, I am having all sorts of trouble.  It
   ran
  for
   several days without a problem, but then started dying a couple
   times a
   day.  Today it has died four times.  Here are a couple odd things
   about
   this:
  
 - On 1.2.6, I never had a problem with stability - the app would
   run
 weeks between restarts (I restart once per deployment, anywhere
   from
   once a
 week to 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-11 Thread Ryan Gravener
Just make sure you are not receiving more than 90 requests from search
engines an hour.  If you are you may want to set up a robots.txt
(http://www.robotstxt.org/) and a sitemap (http://www.sitemaps.org/).

On Sat, Apr 12, 2008 at 1:20 AM, Igor Vaynberg [EMAIL PROTECTED] wrote:
 try a breakpoint in ISessionStore.bind() - that is where the wicket
  session is pushed into httpsession

  -igor


  On Fri, Apr 11, 2008 at 9:26 PM, Jeremy Thomerson


 [EMAIL PROTECTED] wrote:
   Thanks for the insight - didn't know that the webapp had to make a call to
    force the cookie-less support.  Someone asked how often Google is
crawling us.  It seems like at any given point of almost any day, we have
one crawler or another going through the site.  I included some numbers
below to give an idea.
  
Igor - thanks - it could easily be the search form, which is the only 
 thing
that would be stateful on about 95% of the pages that will be crawled.  I
    made myself a note yesterday that I need to look at making that a 
 stateless
form to see if that fixes the unnecessary session creation.  I'll post the
results.
  
The one thing I have determined from all this (which answers a question 
 from
the other thread) is that Google (and the other crawlers) is definitely
going to pages with a jsessionid in the URL, and the jsessionid is not
appearing in the search results (with 2 exceptions out of 30,000+ pages
indexed).  But I know that maybe only a month ago, there were hundreds of
pages from our site that had jsessionids in the URLs that Google had
indexed.  Could it be possible that they are stripping the jsessionid from
URLs they visit now?  I haven't found anywhere that they volunteer much
information on this matter.
  
Bottom line - thanks for everyone's help - I have a bandaid on this now
which will buy me the time to see what's creating the early unnecessary
sessions.  Is there a particular place in the code I should put a 
 breakpoint
    to see where the session is being created / where it says "oh, you have a
    stateful page - here's the component that makes it stateful"?  That's 
 where
I'm headed next, so if anyone knows where that piece of code is, the tip
would be greatly appreciated.
  
Thanks again,
Jeremy
  
Here's a few numbers for the curious.  I took a four minute segment of our
logs from a very slow traffic period - middle of the night.  In that time,
67 sessions were created.  Then did reverse DNS lookups on the IPs.  The
traffic was from:
  
    cuill.com crawler   4   (interesting - new search engine - didn't know
    about it before)
    googlebot           4
    live.com bot        1
    unknown            13
    user               28
    yahoo crawler      26
  
  
  
  
On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg [EMAIL PROTECTED]
wrote:
  
  
  
 On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
 [EMAIL PROTECTED] wrote:
   If you go to http://www.texashuntfish.com/thf/app/home, you will 
 notice
 that
   the first time you hit the page, there are jsessionids in every link 
 -
 same
   if you go there with cookies disabled.

 as far as i know jsessionid is only appended once an http session is
 created and needs to be tracked. so the fact you see it right after
 you go to /app/home should tell you that right away the session is
 created and bound. not good. something in your page is stateful.

   I think this problem is caused by something making the session bind 
 at
 an
   earlier time than it did when I was using 1.2.6 - it's probably still
   something that I'm doing weird, but I need to find it.

 i think this is unlikely. if i remember correctly delayed session
 creation was introduced in 1.3.0. 1.2.6 _always created a session on
 first request_ regardless of whether or not the page you requested was
 stateless or stateful.

 -igor


 
   On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner [EMAIL PROTECTED]
   wrote:
 
 
 
by the way it is all your own fault that you get so many session.
I just searched for your other mails and i did came across: 
 Removing
 the
jsessionid for SEO
   
where you where explaining that you remove the jsessionids from the
 urls..
   
johan
   
   
On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson 
[EMAIL PROTECTED]
wrote:
   
 I upgraded my biggest production app from 1.2.6 to 1.3 last week.
  I
have
 had several apps running on 1.3 since it was in beta with no
 problems -
 running for months without restarting.

 This app receives more traffic than any of the rest.  We have a
 decent
 server, and I had always allowed Tomcat 1.5GB of RAM to operate
 with.
 It
 never had a 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-10 Thread Erik van Oosten
Jeremy,

A workaround is to make the session timeout way lower and add some keep
alive javascript to each page. For example as described by Eelco
(http://chillenious.wordpress.com/2007/06/19/how-to-create-a-text-area-with-a-heart-beat-with-wicket/).

Regards,
 Erik.


Jeremy Thomerson wrote:
 Yes - quite large.  I'm hoping someone has an idea to overcome this.  There
 were definitely not 4500+ unique users on the site at the time.

 There were two copies of the same app deployed on that server at the time -
 one was a staging environment, not being indexed, which is probably where
 the extra ten wicket sessions came from.

 Any ideas?

 Jeremy


 On 4/9/08, Johan Compagner [EMAIL PROTECTED] wrote:
   
 4585 tomcat sessions?

 thats quite large if I may say so..
 and even 10 more wicket sessions than tomcat sessions
 Do you have multiple apps deployed on that server?

 if a search engine doesnt send a cookie back then the urls should be
 encoded with jsessionid
 and we get the session from that..

 johan

 

--
Erik van Oosten
http://day-to-day-stuff.blogspot.com/





Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-10 Thread Jeremy Thomerson
Thanks for the tip.  I came up with an idea last night that I would like to
get input on.  I created an HttpSessionListener that will track all created
sessions.  It has a thread that will run every few minutes, and if a session
does not belong to a signed-in user, it will invalidate it after only ten
minutes of inactivity.  If the session belongs to a signed-in user, it will
give them much longer to be inactive.

Here's the code: http://pastebin.com/m712c7ff0
In the group's opinion, will this work?  It seems like a hack to me, but I
have to do something.
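
(For anyone who doesn't want to follow the pastebin link, the general shape of
it is roughly the sketch below - a reconstruction from the description above,
not the exact code; the "signedInUser" attribute name is made up:)

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class SessionCullingListener implements HttpSessionListener {
    private static final Map SESSIONS = new HashMap();
    private static final long ANONYMOUS_IDLE_MS = 10 * 60 * 1000L;

    static {
        // daemon timer that sweeps the tracked sessions every few minutes
        Timer timer = new Timer("session-culler", true);
        timer.schedule(new TimerTask() {
            public void run() {
                long now = System.currentTimeMillis();
                Map snapshot;
                synchronized (SESSIONS) {
                    snapshot = new HashMap(SESSIONS);
                }
                for (Iterator it = snapshot.values().iterator(); it.hasNext();) {
                    HttpSession session = (HttpSession) it.next();
                    try {
                        boolean signedIn = session.getAttribute("signedInUser") != null;
                        if (!signedIn && now - session.getLastAccessedTime() > ANONYMOUS_IDLE_MS) {
                            session.invalidate();
                        }
                    } catch (IllegalStateException alreadyInvalidated) {
                        // expired between the snapshot and this check - ignore
                    }
                }
            }
        }, 5 * 60 * 1000L, 5 * 60 * 1000L);
    }

    public void sessionCreated(HttpSessionEvent event) {
        synchronized (SESSIONS) {
            SESSIONS.put(event.getSession().getId(), event.getSession());
        }
    }

    public void sessionDestroyed(HttpSessionEvent event) {
        synchronized (SESSIONS) {
            SESSIONS.remove(event.getSession().getId());
        }
    }
}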

Jeremy

On Thu, Apr 10, 2008 at 1:56 AM, Erik van Oosten [EMAIL PROTECTED]
wrote:

 Jeremy,

 A workaround is to make the session timeout way lower and add some keep
 alive javascript to each page. For example as described by Eelco
 (
 http://chillenious.wordpress.com/2007/06/19/how-to-create-a-text-area-with-a-heart-beat-with-wicket/
 ).

 Regards,
 Erik.


 Jeremy Thomerson wrote:
  Yes - quite large.  I'm hoping someone has an idea to overcome this.
  There
  were definitely not 4500+ unique users on the site at the time.
 
  There were two copies of the same app deployed on that server at the
 time -
  one was a staging environment, not being indexed, which is probably
 where
  the extra ten wicket sessions came from.
 
  Any ideas?
 
  Jeremy
 
 
  On 4/9/08, Johan Compagner [EMAIL PROTECTED] wrote:
 
  4585 tomcat sessions?
 
  thats quite large if may say that..
  and even more 10 wicket sessions that tomcat sessions
  Do you have multiply apps deployed on that server?
 
  if a search engine doesnt send a cookie back then the urls should be
  encoded with jsessionid
  and we get the session from that..
 
  johan
 
 

 --
 Erik van Oosten
 http://day-to-day-stuff.blogspot.com/






Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-10 Thread Maarten Bosteels
Hi,


On Thu, Apr 10, 2008 at 6:40 PM, Igor Vaynberg [EMAIL PROTECTED]
wrote:

 httpsession already has a settimeout no? so once a user logs in you
 can set it to a longer period


We use that technique (not on a wicket app though) and it seems to work.
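
At the servlet level it is just something like this sketch (call it from
whatever code handles a successful sign-in):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class SessionTimeouts {
    public static void extendForSignedInUser(HttpServletRequest request) {
        HttpSession session = request.getSession();
        // signed-in users get an hour; anonymous sessions keep the short default
        session.setMaxInactiveInterval(60 * 60);
    }
}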

Something else to consider:  Do you want Google (and other bots) to crawl
your site?
If not, you could install a robots.txt file.

http://www.google.com/support/webmasters/bin/answer.py?answer=83097

regards,
Maarten




 -igor


 On Thu, Apr 10, 2008 at 7:38 AM, Jeremy Thomerson
 [EMAIL PROTECTED] wrote:
  Thanks for the tip.  I came up with an idea last night that I would like
 to
   get input on.  I created an HttpSessionListener that will track all
 created
   sessions.  It has a thread that will run every few minutes, and if a
 session
   does not belong to a signed-in user, it will invalidate it after only
 ten
   minutes of inactivity.  If the session belongs to a signed-in user, it
 will
   give them much longer to be inactive.
 
   Here's the code: http://pastebin.com/m712c7ff0
   In the group's opinion, will this work?  It seems like a hack to me,
 but I
   have to do something.
 
   Jeremy
 
   On Thu, Apr 10, 2008 at 1:56 AM, Erik van Oosten [EMAIL PROTECTED]
   wrote:
 
 
 
Jeremy,
   
A workaround is to make the session timeout way lower and add some
 keep
alive javascript to each page. For example as described by Eelco
(
   
 http://chillenious.wordpress.com/2007/06/19/how-to-create-a-text-area-with-a-heart-beat-with-wicket/
).
   
Regards,
Erik.
   
   
Jeremy Thomerson wrote:
 Yes - quite large.  I'm hoping someone has an idea to overcome
 this.
 There
 were definitely not 4500+ unique users on the site at the time.

 There were two copies of the same app deployed on that server at
 the
time -
 one was a staging environment, not being indexed, which is probably
where
 the extra ten wicket sessions came from.

 Any ideas?

 Jeremy


 On 4/9/08, Johan Compagner [EMAIL PROTECTED] wrote:

 4585 tomcat sessions?

 thats quite large if may say that..
 and even more 10 wicket sessions that tomcat sessions
 Do you have multiply apps deployed on that server?

 if a search engine doesnt send a cookie back then the urls should
 be
 encoded with jsessionid
 and we get the session from that..

 johan


   
--
Erik van Oosten
http://day-to-day-stuff.blogspot.com/
   
   
   
   
 





Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-10 Thread Jeremy Thomerson
Thanks for the suggestion - I may just go that route instead of my own
listener.

As far as Google - yes!  We're a public community site (for hunting and
fishing in Texas), and almost all of our non-repeat traffic comes from
search engines, so we must be highly ranked.  We're #2 in Google for "texas
hunting", which is huge since the only one above us is a Texas state
department - Texas Parks and Wildlife.

Thank you,
Jeremy

On Thu, Apr 10, 2008 at 2:50 PM, Maarten Bosteels [EMAIL PROTECTED]
wrote:

 Hi,


 On Thu, Apr 10, 2008 at 6:40 PM, Igor Vaynberg [EMAIL PROTECTED]
 wrote:

  httpsession already has a settimeout no? so once a user logs in you
  can set it to a longer period


 We use that technique (not on a wicket app though) and it seems to work.

 Something else to consider:  Do you want Google (and other bots) to crawl
 your site ?
 If not, you could install a robots.txt file.

 http://www.google.com/support/webmasters/bin/answer.py?answer=83097

 regards,
 Maarten


 
 
  -igor
 
 
  On Thu, Apr 10, 2008 at 7:38 AM, Jeremy Thomerson
  [EMAIL PROTECTED] wrote:
   Thanks for the tip.  I came up with an idea last night that I would
 like
  to
get input on.  I created an HttpSessionListener that will track all
  created
sessions.  It has a thread that will run every few minutes, and if a
  session
does not belong to a signed-in user, it will invalidate it after only
  ten
minutes of inactivity.  If the session belongs to a signed-in user,
 it
  will
give them much longer to be inactive.
  
Here's the code: http://pastebin.com/m712c7ff0
In the group's opinion, will this work?  It seems like a hack to me,
  but I
have to do something.
  
Jeremy
  
On Thu, Apr 10, 2008 at 1:56 AM, Erik van Oosten 
 [EMAIL PROTECTED]
wrote:
  
  
  
 Jeremy,

 A workaround is to make the session timeout way lower and add some
  keep
 alive javascript to each page. For example as described by Eelco
 (

 
 http://chillenious.wordpress.com/2007/06/19/how-to-create-a-text-area-with-a-heart-beat-with-wicket/
 ).

 Regards,
 Erik.


 Jeremy Thomerson wrote:
  Yes - quite large.  I'm hoping someone has an idea to overcome
  this.
  There
  were definitely not 4500+ unique users on the site at the time.
 
  There were two copies of the same app deployed on that server at
  the
 time -
  one was a staging environment, not being indexed, which is
 probably
 where
  the extra ten wicket sessions came from.
 
  Any ideas?
 
  Jeremy
 
 
  On 4/9/08, Johan Compagner [EMAIL PROTECTED] wrote:
 
  4585 tomcat sessions?
 
  thats quite large if may say that..
  and even more 10 wicket sessions that tomcat sessions
  Do you have multiply apps deployed on that server?
 
  if a search engine doesnt send a cookie back then the urls
 should
  be
  encoded with jsessionid
  and we get the session from that..
 
  johan
 
 

 --
 Erik van Oosten
 http://day-to-day-stuff.blogspot.com/





  
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 



Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-09 Thread Jeremy Thomerson
I am finally able to get a good analysis of it.  It dumped two memory dumps
when it died in the past couple of days (it's still dying about once or twice a
day).  Using this GREAT tool:
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis
I am able to see deep memory views that tell me what is taking up the
memory.

Each time it dies, it is usually during heavy web crawler traffic from
Google / Ask.com / etc.  The memory tool shows me having:
4585 instances of org.apache.catalina.session.StandardSession, totaling
984,160,016 bytes of memory (I give Tomcat 1GB).
4595 instances of com.texashuntfish.web.wicket.WebSession, totaling
530,524,680 bytes of memory.

I have had the session expiration turned down to 90 minutes in Tomcat for a
very long time.  So, this means that 4,585 sessions are created in 90
minutes, right?  It seems like I shouldn't have this many sessions, right?
Are the search engine crawlers creating a new session for every page they
hit?  I'm going to put some logging output in my subclass of WebSession that
tells me when it's being created, and by what IP so that I can take a look
at this.
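
Something along these lines should do it (a rough sketch, assuming the Wicket
1.3 WebSession(Request) constructor and WebRequest.getHttpServletRequest(); the
class name is just a placeholder):

import org.apache.wicket.Request;
import org.apache.wicket.protocol.http.WebRequest;
import org.apache.wicket.protocol.http.WebSession;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingWebSession extends WebSession {

    private static final Logger log = LoggerFactory.getLogger(LoggingWebSession.class);

    // return "new LoggingWebSession(request)" from the application's
    // newSession(Request, Response) override so every session goes through here
    public LoggingWebSession(Request request) {
        super(request);
        String ip = (request instanceof WebRequest)
                ? ((WebRequest) request).getHttpServletRequest().getRemoteAddr()
                : "unknown";
        log.info("new wicket session created for client " + ip);
    }
}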

Does anyone have any ideas?  I don't know what my session counts were in
1.2.6, but I never ran into this problem or ran out of memory, except for
about a year and a half ago when I had session lengths turned up to a couple
days long.

Thank you,
Jeremy Thomerson


On Fri, Apr 4, 2008 at 12:03 AM, Jeremy Thomerson 
[EMAIL PROTECTED] wrote:

 Nope - one page never holds on to another.  I never even pass pages into
 another page or link or something as a reference.

 Interestingly, I DECREASED the memory the JVM could have from 1.5 GB to
 1.0 GB today, and it has been stable all day (after also releasing a version
 using Wicket 1.3.3).  That's not a definite sign - it was stable for several
 days after upgrading to 1.3.2 from 1.2.6 before freaking out.  But I'll
 watch it closely.  The memory crept slowly up to the max, and has stayed
 there, but without the site crashing, and without any degradation of
 performance.  Does that give anyone any ideas?  I'm so exhausted, I think
 that I'm starting to lose my ability to think freshly about it.

 Thank you,
 Jeremy


 On Thu, Apr 3, 2008 at 5:44 PM, Matej Knopp [EMAIL PROTECTED] wrote:

  This is really weird. Do you have any inter-page references in your
  application?
 
  -Matej
 
  On Thu, Apr 3, 2008 at 9:35 PM, Jeremy Thomerson
  [EMAIL PROTECTED] wrote:
   The oddness is what baffles me: Tomcat has no output anywhere.  I have
grepped and tailed the entire Tomcat logs directory, stdout*,
  stderr*,
localhost*, etc.  Nothing in eventvwr.
  
It must be memory related, though.  There is a steadily increasing
  memory
footprint - it was increasing so fast yesterday because we were
  getting
pounded by tons of traffic and Google's crawler and Ask's crawler all
simultaneously.  Of course, the traffic was still no higher than it
  has been
in the past - this is definitely a new problem.
  
I redeployed today with the pending 1.3.3 release built by Frank to
  see if
my leak could be the same as Martijn's below, but the memory
  continues to
increase.  It will die soon.  I have added the parameter to tell it
  to dump
on OOM - hopefully I got the right parameter and it will work.
  
Anyone here know how to (or if you can) use jstat / jmap with
  tomcat5.exe,
running as Windows service?  All my development is on Linux machines,
  and I
can easily use those tools, but on the Windows prod environment
  (ughh), jps
doesn't give me a VMID for Tomcat.
  
Thank you for your help!
Jeremy
  
  
  
On Thu, Apr 3, 2008 at 2:27 PM, Al Maw [EMAIL PROTECTED] wrote:
  
 You can use as many anonymous inner classes as you like. I have
  them
 coming
 out of my ears, personally.

 It's very odd for tomcat to die with no output. There will be
  output
 somewhere. Check logs/catalina.out and also logs/localhost*. If the
  JVM
 dies, it will hotspot or even segfault and log that, at least. If
  you have
 gradually increasing memory footprint then this should be pretty
  easy to
 track down with a profiler.

 Make sure you run Tomcat with a sensible amount of permanent
  generation
 space (128M+).

 Regards,

 Alastair



 On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst 
 [EMAIL PROTECTED]
 wrote:

  There are commandline options for the jvm to dump on OOM.
 
  Anyway, doesn't the log file give any insight into what is
  happening
  in your application? Did you (or your sysadmin) disable logging
  for
  Wicket?
 
  You can also run external tools to see what is happening inside
  your
  JVM without blocking the app. e.g. use jmap -histo to see how
  many
  objects are alive at a particular moment. The top 10 is always
  interesting. In my case I found a memory 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-09 Thread Johan Compagner
4585 tomcat sessions?

that's quite large, if I may say so..
and even stranger, 10 more wicket sessions than tomcat sessions
Do you have multiple apps deployed on that server?

If a search engine doesn't send a cookie back, then the URLs should be encoded
with jsessionid, and we get the session from that..
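
For reference, that is plain servlet-spec URL rewriting; a minimal sketch of
the mechanism outside of Wicket (the path is just an example):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EncodeUrlExample extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.getSession(true); // make sure a session exists
        // while the client has not returned a JSESSIONID cookie, encodeURL()
        // appends ;jsessionid=... so the session can still be tracked;
        // once the cookie comes back, the URL is left untouched
        String link = response.encodeURL("/app/home");
        response.getWriter().write("<a href=\"" + link + "\">home</a>");
    }
}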

johan



On Thu, Apr 10, 2008 at 12:22 AM, Jeremy Thomerson 
[EMAIL PROTECTED] wrote:

 I finally am able to get a good analysis of it.  It dumped two memory
 dumps
 when it died in the past couple days (it's still dying about once or twice
 a
 day).  Using this GREAT tool:

 https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis
 I am able to see deep memory views that tell me what is taking up the
 memory.

 Each time it dies, it is usually during heavy web crawler traffic from
 Google / Ask.com / etc.  The memory tool shows me having:
 4585 instances of org.apache.catalina.session.StandardSession, totaling
 984,160,016 bytes of memory (I give Tomcat 1GB).
 4595 instances of com.texashuntfish.web.wicket.WebSession, totaling
 530,524,680 bytes of memory.

 I have had the session expiration turned down to 90 minutes in Tomcat for
 a
 very long time.  So, this means that 4,585 sessions are created in 90
 minutes, right?  It seems like I shouldn't have this many sessions, right?
 Are the search engine crawlers creating a new session for every page they
 hit?  I'm going to put some logging output in my subclass of WebSession
 that
 tells me when it's being created, and by what IP so that I can take a look
 at this.

 Does anyone have any ideas?  I don't know what my session counts were in
 1.2.6, but I never ran into this problem or ran out of memory, except for
 about a year and a half ago when I had session lengths turned up to a
 couple
 days long.

 Thank you,
 Jeremy Thomerson


 On Fri, Apr 4, 2008 at 12:03 AM, Jeremy Thomerson 
 [EMAIL PROTECTED] wrote:

  Nope - one page never holds on to another.  I never even pass pages into
  another page or link or something as a reference.
 
  Interestingly, I DECREASED the memory the JVM could have from 1.5 GB to
  1.0 GB today, and it has been stable all day (after also releasing a
 version
  using Wicket 1.3.3).  That's not a definite sign - it was stable for
 several
  days after upgrading to 1.3.2 from 1.2.6 before freaking out.  But I'll
  watch it closely.  The memory crept slowly up to the max, and has
 stayed
  there, but without the site crashing, and without any degradation of
  performance.  Does that give anyone any ideas?  I'm so exhausted, I
 think
  that I'm starting to lose my ability to think freshly about it.
 
  Thank you,
  Jeremy
 
 
  On Thu, Apr 3, 2008 at 5:44 PM, Matej Knopp [EMAIL PROTECTED]
 wrote:
 
   This is really weird. Do you have any inter-page references in your
   application?
  
   -Matej
  
   On Thu, Apr 3, 2008 at 9:35 PM, Jeremy Thomerson
   [EMAIL PROTECTED] wrote:
The oddness is what baffles me: Tomcat has no output anywhere.  I
 have
 grepped and tailed the entire Tomcat logs directory, stdout*,
   stderr*,
 localhost*, etc.  Nothing in eventvwr.
   
 It must be memory related, though.  There is a steadily increasing
   memory
 footprint - it was increasing so fast yesterday because we were
   getting
 pounded by tons of traffic and Google's crawler and Ask's crawler
 all
 simultaneously.  Of course, the traffic was still no higher than it
   has been
 in the past - this is definitely a new problem.
   
 I redeployed today with the pending 1.3.3 release built by Frank to
   see if
 my leak could be the same as Martijn's below, but the memory
   continues to
 increase.  It will die soon.  I have added the parameter to tell it
   to dump
 on OOM - hopefully I got the right parameter and it will work.
   
 Anyone here know how to (or if you can) use jstat / jmap with
   tomcat5.exe,
 running as Windows service?  All my development is on Linux
 machines,
   and I
 can easily use those tools, but on the Windows prod environment
   (ughh), jps
 doesn't give me a VMID for Tomcat.
   
 Thank you for your help!
 Jeremy
   
   
   
 On Thu, Apr 3, 2008 at 2:27 PM, Al Maw [EMAIL PROTECTED] wrote:
   
  You can use as many anonymous inner classes as you like. I have
   them
  coming
  out of my ears, personally.
 
  It's very odd for tomcat to die with no output. There will be
   output
  somewhere. Check logs/catalina.out and also logs/localhost*. If
 the
   JVM
  dies, it will hotspot or even segfault and log that, at least. If
   you have
  gradually increasing memory footprint then this should be pretty
   easy to
  track down with a profiler.
 
  Make sure you run Tomcat with a sensible amount of permanent
   generation
  space (128M+).
 
  Regards,
 
  Alastair
 
 
 
  On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst 
  [EMAIL PROTECTED]
  wrote:

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-09 Thread Jeremy Thomerson
Yes - quite large.  I'm hoping someone has an idea to overcome this.  There
were definitely not 4500+ unique users on the site at the time.

There were two copies of the same app deployed on that server at the time -
one was a staging environment, not being indexed, which is probably where
the extra ten wicket sessions came from.

Any ideas?

Jeremy


On 4/9/08, Johan Compagner [EMAIL PROTECTED] wrote:

 4585 tomcat sessions?

 that's quite large, if I may say so..
 and even stranger, 10 more wicket sessions than tomcat sessions
 Do you have multiple apps deployed on that server?

 If a search engine doesn't send a cookie back, then the URLs should be
 encoded with jsessionid, and we get the session from that..

 johan



 On Thu, Apr 10, 2008 at 12:22 AM, Jeremy Thomerson 
 [EMAIL PROTECTED] wrote:

  I finally am able to get a good analysis of it.  It dumped two memory
  dumps
  when it died in the past couple days (it's still dying about once or
  twice a
  day).  Using this GREAT tool:
 
  https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis
  I am able to see deep memory views that tell me what is taking up the
  memory.
 
  Each time it dies, it is usually during heavy web crawler traffic from
  Google / Ask.com / etc.  The memory tool shows me having:
  4585 instances of org.apache.catalina.session.StandardSession, totaling
  984,160,016 bytes of memory (I give Tomcat 1GB).
  4595 instances of com.texashuntfish.web.wicket.WebSession, totaling
  530,524,680 bytes of memory.
 
  I have had the session expiration turned down to 90 minutes in Tomcat
  for a
  very long time.  So, this means that 4,585 sessions are created in 90
  minutes, right?  It seems like I shouldn't have this many sessions,
  right?
  Are the search engine crawlers creating a new session for every page
  they
  hit?  I'm going to put some logging output in my subclass of WebSession
  that
  tells me when it's being created, and by what IP so that I can take a
  look
  at this.
 
  Does anyone have any ideas?  I don't know what my session counts were in
  1.2.6, but I never ran into this problem or ran out of memory, except
  for
  about a year and a half ago when I had session lengths turned up to a
  couple
  days long.
 
  Thank you,
  Jeremy Thomerson
 
 
  On Fri, Apr 4, 2008 at 12:03 AM, Jeremy Thomerson 
   [EMAIL PROTECTED] wrote:
 
   Nope - one page never holds on to another.  I never even pass pages
  into
   another page or link or something as a reference.
  
   Interestingly, I DECREASED the memory the JVM could have from 1.5 GB
  to
   1.0 GB today, and it has been stable all day (after also releasing a
  version
   using Wicket 1.3.3).  That's not a definite sign - it was stable for
  several
   days after upgrading to 1.3.2 from 1.2.6 before freaking out.  But
  I'll
   watch it closely.  The memory crept slowly up to the max, and has
  stayed
   there, but without the site crashing, and without any degradation of
   performance.  Does that give anyone any ideas?  I'm so exhausted, I
  think
   that I'm starting to lose my ability to think freshly about it.
  
   Thank you,
   Jeremy
  
  
   On Thu, Apr 3, 2008 at 5:44 PM, Matej Knopp [EMAIL PROTECTED]
  wrote:
  
This is really weird. Do you have any inter-page references in your
application?
   
-Matej
   
On Thu, Apr 3, 2008 at 9:35 PM, Jeremy Thomerson
[EMAIL PROTECTED] wrote:
 The oddness is what baffles me: Tomcat has no output anywhere.  I
  have
  grepped and tailed the entire Tomcat logs directory, stdout*,
stderr*,
  localhost*, etc.  Nothing in eventvwr.

  It must be memory related, though.  There is a steadily
  increasing
memory
  footprint - it was increasing so fast yesterday because we were
getting
  pounded by tons of traffic and Google's crawler and Ask's crawler
  all
  simultaneously.  Of course, the traffic was still no higher than
  it
has been
  in the past - this is definitely a new problem.

  I redeployed today with the pending 1.3.3 release built by Frank
  to
see if
  my leak could be the same as Martijn's below, but the memory
continues to
  increase.  It will die soon.  I have added the parameter to tell
  it
to dump
  on OOM - hopefully I got the right parameter and it will work.

  Anyone here know how to (or if you can) use jstat / jmap with
tomcat5.exe,
  running as Windows service?  All my development is on Linux
  machines,
and I
  can easily use those tools, but on the Windows prod environment
(ughh), jps
  doesn't give me a VMID for Tomcat.

  Thank you for your help!
  Jeremy



  On Thu, Apr 3, 2008 at 2:27 PM, Al Maw [EMAIL PROTECTED] wrote:

   You can use as many anonymous inner classes as you like. I have
them
   coming
   out of my ears, personally.
  
   It's very odd for tomcat to die with no output. There will be
 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-03 Thread Al Maw
You can use as many anonymous inner classes as you like. I have them coming
out of my ears, personally.

It's very odd for tomcat to die with no output. There will be output
somewhere. Check logs/catalina.out and also logs/localhost*. If the JVM
dies, it will hotspot or even segfault and log that, at least. If you have
gradually increasing memory footprint then this should be pretty easy to
track down with a profiler.

Make sure you run Tomcat with a sensible amount of permanent generation
space (128M+).
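
For example, something like the following in the JVM options (the heap sizes
are placeholders; when Tomcat runs as a Windows service these go into the
service's Java options rather than CATALINA_OPTS):

-Xms512m -Xmx1024m -XX:MaxPermSize=128m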

Regards,

Alastair



On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst [EMAIL PROTECTED]
wrote:

 There are commandline options for the jvm to dump on OOM.

 Anyway, doesn't the log file give any insight into what is happening
 in your application? Did you (or your sysadmin) disable logging for
 Wicket?

 You can also run external tools to see what is happening inside your
 JVM without blocking the app. e.g. use jmap -histo to see how many
 objects are alive at a particular moment. The top 10 is always
 interesting. In my case I found a memory leak in the diskpagestore
 when exceptions occurred during writing to disk. This is solved in
 1.3.3 (which is just days away from an official release, try it!)

 jstat -gc -h50 pid 1000 will log the garbage collector statistics
 every second.

 Martijn

 On 4/3/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
  I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
 have
   had several apps running on 1.3 since it was in beta with no problems -
   running for months without restarting.
 
   This app receives more traffic than any of the rest.  We have a decent
   server, and I had always allowed Tomcat 1.5GB of RAM to operate with.
  It
   never had a problem doing so, and I didn't have OutOfMemory errors.
  Now,
   after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran
 for
   several days without a problem, but then started dying a couple times a
   day.  Today it has died four times.  Here are a couple odd things about
   this:
 
 - On 1.2.6, I never had a problem with stability - the app would run
 weeks between restarts (I restart once per deployment, anywhere from
 once a
 week to at the longest about two months between deploy / restart).
 - Tomcat DIES instead of hanging when there is a problem.  Always
 before, if I had an issue, Tomcat would hang, and there would be OOM
 in the
 logs.  Now, when it crashes, and I sign in to the server, Tomcat is
 not
 running at all.  There is nothing in the Tomcat logs that says
 anything, or
 in eventvwr.
 - I do not get OutOfMemory error in any logs, whereas I have always
 seen it in the logs before when I had an issue with other apps.  I am
 running Tomcat as a service on Windows, but it writes stdout / stderr
 to
 logs, and I write my logging out to logs, and none of these logs
 include ANY
 errors - they all just suddenly stop at the time of the crash.
 
    My money is on an OOM error caused by something I am doing with Wicket
    that I shouldn't be.  There are no logs that even say it is an OOM, but
    the memory continues to increase linearly over time as the app runs now
    (it didn't do that before).  My first guess is my previous prolific use
    of anonymous inner classes.  I have seen in the email threads that this
    shouldn't be done in 1.3.
 
   Of course, the real answer is that I'm going to be digging through
 profilers
   and lines of code until I get this fixed.
 
   My question, though, is from the Wicket devs / experienced users -
 where
   should I look first?  Is there something that changed between 1.2.6 and
 1.3
   that might have caused me problems where 1.2.6 was more forgiving?
 
   I'm running the app with JProbe right now so that I can get a snapshot
 of
   memory when it gets really high.
 
   Thank you,
 
  Jeremy Thomerson
 


 --
 Buy Wicket in Action: http://manning.com/dashorst
 Apache Wicket 1.3.2 is released
 Get it now: http://www.apache.org/dyn/closer.cgi/wicket/1.3.2





Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-03 Thread Jeremy Thomerson
The oddness is what baffles me: Tomcat has no output anywhere.  I have
grepped and tailed the entire Tomcat logs directory, stdout*, stderr*,
localhost*, etc.  Nothing in eventvwr.

It must be memory related, though.  There is a steadily increasing memory
footprint - it was increasing so fast yesterday because we were getting
pounded by tons of traffic and Google's crawler and Ask's crawler all
simultaneously.  Of course, the traffic was still no higher than it has been
in the past - this is definitely a new problem.

I redeployed today with the pending 1.3.3 release built by Frank to see if
my leak could be the same as Martijn's below, but the memory continues to
increase.  It will die soon.  I have added the parameter to tell it to dump
on OOM - hopefully I got the right parameter and it will work.

Anyone here know how to (or if you can) use jstat / jmap with tomcat5.exe,
running as Windows service?  All my development is on Linux machines, and I
can easily use those tools, but on the Windows prod environment (ughh), jps
doesn't give me a VMID for Tomcat.

Thank you for your help!
Jeremy

On Thu, Apr 3, 2008 at 2:27 PM, Al Maw [EMAIL PROTECTED] wrote:

 You can use as many anonymous inner classes as you like. I have them
 coming
 out of my ears, personally.

 It's very odd for tomcat to die with no output. There will be output
 somewhere. Check logs/catalina.out and also logs/localhost*. If the JVM
 dies, it will hotspot or even segfault and log that, at least. If you have
 gradually increasing memory footprint then this should be pretty easy to
 track down with a profiler.

 Make sure you run Tomcat with a sensible amount of permanent generation
 space (128M+).

 Regards,

 Alastair



 On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst 
 [EMAIL PROTECTED]
 wrote:

  There are commandline options for the jvm to dump on OOM.
 
  Anyway, doesn't the log file give any insight into what is happening
  in your application? Did you (or your sysadmin) disable logging for
  Wicket?
 
  You can also run external tools to see what is happening inside your
  JVM without blocking the app. e.g. use jmap -histo to see how many
  objects are alive at a particular moment. The top 10 is always
  interesting. In my case I found a memory leak in the diskpagestore
  when exceptions occurred during writing to disk. This is solved in
  1.3.3 (which is just days away from an official release, try it!)
 
  jstat -gc -h50 pid 1000 will log the garbage collector statistics
  every second.
 
  Martijn
 
  On 4/3/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
   I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
  have
had several apps running on 1.3 since it was in beta with no problems
 -
running for months without restarting.
  
This app receives more traffic than any of the rest.  We have a
 decent
server, and I had always allowed Tomcat 1.5GB of RAM to operate with.
   It
never had a problem doing so, and I didn't have OutOfMemory errors.
   Now,
after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran
  for
several days without a problem, but then started dying a couple times
 a
day.  Today it has died four times.  Here are a couple odd things
 about
this:
  
  - On 1.2.6, I never had a problem with stability - the app would
 run
  weeks between restarts (I restart once per deployment, anywhere
 from
  once a
  week to at the longest about two months between deploy / restart).
  - Tomcat DIES instead of hanging when there is a problem.  Always
  before, if I had an issue, Tomcat would hang, and there would be
 OOM
  in the
  logs.  Now, when it crashes, and I sign in to the server, Tomcat is
  not
  running at all.  There is nothing in the Tomcat logs that says
  anything, or
  in eventvwr.
  - I do not get OutOfMemory error in any logs, whereas I have always
  seen it in the logs before when I had an issue with other apps.  I
 am
  running Tomcat as a service on Windows, but it writes stdout /
 stderr
  to
  logs, and I write my logging out to logs, and none of these logs
  include ANY
  errors - they all just suddenly stop at the time of the crash.
  
My money is on an OOM error caused by something I am doing with Wicket that
I shouldn't be.  There are no logs that even say it is an OOM, but the memory
continues to increase linearly over time as the app runs now (it didn't do
that before).  My first guess is my previous prolific use of anonymous inner
classes.  I have seen in the email threads that this shouldn't be done in 1.3.
  
Of course, the real answer is that I'm going to be digging through
  profilers
and lines of code until I get this fixed.
  
My question, though, is from the Wicket devs / experienced users -
  where
should I look first?  Is there something that changed between 1.2.6
 and
  1.3
that might have caused me problems 

Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-03 Thread Jeremy Thomerson
Nope - one page never holds on to another.  I never even pass pages into
another page or link or something as a reference.

Interestingly, I DECREASED the memory the JVM could have from 1.5 GB to 1.0
GB today, and it has been stable all day (after also releasing a version
using Wicket 1.3.3).  That's not a definite sign - it was stable for several
days after upgrading to 1.3.2 from 1.2.6 before freaking out.  But I'll
watch it closely.  The memory crept slowly up to the max, and has stayed
there, but without the site crashing, and without any degradation of
performance.  Does that give anyone any ideas?  I'm so exhausted, I think
that I'm starting to lose my ability to think freshly about it.

Thank you,
Jeremy

On Thu, Apr 3, 2008 at 5:44 PM, Matej Knopp [EMAIL PROTECTED] wrote:

 This is really weird. Do you have any inter-page references in your
 application?

 -Matej

 On Thu, Apr 3, 2008 at 9:35 PM, Jeremy Thomerson
 [EMAIL PROTECTED] wrote:
  The oddness is what baffles me: Tomcat has no output anywhere.  I have
   grepped and tailed the entire Tomcat logs directory, stdout*, stderr*,
   localhost*, etc.  Nothing in eventvwr.
 
   It must be memory related, though.  There is a steadily increasing
 memory
   footprint - it was increasing so fast yesterday because we were getting
   pounded by tons of traffic and Google's crawler and Ask's crawler all
   simultaneously.  Of course, the traffic was still no higher than it has
 been
   in the past - this is definitely a new problem.
 
   I redeployed today with the pending 1.3.3 release built by Frank to see
 if
   my leak could be the same as Martijn's below, but the memory continues
 to
   increase.  It will die soon.  I have added the parameter to tell it to
 dump
   on OOM - hopefully I got the right parameter and it will work.
 
   Anyone here know how to (or if you can) use jstat / jmap with
 tomcat5.exe,
   running as Windows service?  All my development is on Linux machines,
 and I
   can easily use those tools, but on the Windows prod environment (ughh),
 jps
   doesn't give me a VMID for Tomcat.
 
   Thank you for your help!
   Jeremy
 
 
 
   On Thu, Apr 3, 2008 at 2:27 PM, Al Maw [EMAIL PROTECTED] wrote:
 
You can use as many anonymous inner classes as you like. I have them
coming
out of my ears, personally.
   
It's very odd for tomcat to die with no output. There will be output
somewhere. Check logs/catalina.out and also logs/localhost*. If the
 JVM
dies, it will hotspot or even segfault and log that, at least. If you
 have
gradually increasing memory footprint then this should be pretty easy
 to
track down with a profiler.
   
Make sure you run Tomcat with a sensible amount of permanent
 generation
space (128M+).
   
Regards,
   
Alastair
   
   
   
On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst 
[EMAIL PROTECTED]
wrote:
   
 There are commandline options for the jvm to dump on OOM.

 Anyway, doesn't the log file give any insight into what is
 happening
 in your application? Did you (or your sysadmin) disable logging for
 Wicket?

 You can also run external tools to see what is happening inside
 your
 JVM without blocking the app. e.g. use jmap -histo to see how many
 objects are alive at a particular moment. The top 10 is always
 interesting. In my case I found a memory leak in the diskpagestore
 when exceptions occurred during writing to disk. This is solved in
 1.3.3 (which is just days away from an official release, try it!)

 jstat -gc -h50 pid 1000 will log the garbage collector statistics
 every second.

 Martijn

 On 4/3/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
  I upgraded my biggest production app from 1.2.6 to 1.3 last week.
  I
 have
   had several apps running on 1.3 since it was in beta with no
 problems
-
   running for months without restarting.
 
   This app receives more traffic than any of the rest.  We have a
decent
   server, and I had always allowed Tomcat 1.5GB of RAM to operate
 with.
  It
   never had a problem doing so, and I didn't have OutOfMemory
 errors.
  Now,
   after the upgrade to 1.3.2, I am having all sorts of trouble.
  It ran
 for
   several days without a problem, but then started dying a couple
 times
a
   day.  Today it has died four times.  Here are a couple odd
 things
about
   this:
 
 - On 1.2.6, I never had a problem with stability - the app
 would
run
 weeks between restarts (I restart once per deployment,
 anywhere
from
 once a
 week to at the longest about two months between deploy /
 restart).
 - Tomcat DIES instead of hanging when there is a problem.
  Always
 before, if I had an issue, Tomcat would hang, and there would
 be
OOM
 in the
 logs.  Now, when it crashes, and I sign in to the server,
 

Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-02 Thread Jeremy Thomerson
I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I have
had several apps running on 1.3 since it was in beta with no problems -
running for months without restarting.

This app receives more traffic than any of the rest.  We have a decent
server, and I had always allowed Tomcat 1.5GB of RAM to operate with.  It
never had a problem doing so, and I didn't have OutOfMemory errors.  Now,
after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran for
several days without a problem, but then started dying a couple times a
day.  Today it has died four times.  Here are a couple odd things about
this:

   - On 1.2.6, I never had a problem with stability - the app would run
   weeks between restarts (I restart once per deployment, anywhere from once a
   week to at the longest about two months between deploy / restart).
   - Tomcat DIES instead of hanging when there is a problem.  Always
   before, if I had an issue, Tomcat would hang, and there would be OOM in the
   logs.  Now, when it crashes, and I sign in to the server, Tomcat is not
   running at all.  There is nothing in the Tomcat logs that says anything, or
   in eventvwr.
   - I do not get OutOfMemory error in any logs, whereas I have always
   seen it in the logs before when I had an issue with other apps.  I am
   running Tomcat as a service on Windows, but it writes stdout / stderr to
   logs, and I write my logging out to logs, and none of these logs include ANY
   errors - they all just suddenly stop at the time of the crash.

My money is on an OOM error caused by something I am doing with Wicket that I
shouldn't be.  There are no logs that even say it is an OOM, but the memory
continues to increase linearly over time as the app runs now (it didn't do
that before).  My first guess is my previous prolific use of anonymous inner
classes.  I have seen in the email threads that this shouldn't be done in 1.3.

Of course, the real answer is that I'm going to be digging through profilers
and lines of code until I get this fixed.

My question, though, is from the Wicket devs / experienced users - where
should I look first?  Is there something that changed between 1.2.6 and 1.3
that might have caused me problems where 1.2.6 was more forgiving?

I'm running the app with JProbe right now so that I can get a snapshot of
memory when it gets really high.

Thank you,
Jeremy Thomerson


Re: Tomcat dying with Wicket 1.3.2 (Windows / JDK 1.5.0_10)

2008-04-02 Thread Martijn Dashorst
There are commandline options for the jvm to dump on OOM.

Anyway, doesn't the log file give any insight into what is happening
in your application? Did you (or your sysadmin) disable logging for
Wicket?

You can also run external tools to see what is happening inside your
JVM without blocking the app. e.g. use jmap -histo to see how many
objects are alive at a particular moment. The top 10 is always
interesting. In my case I found a memory leak in the diskpagestore
when exceptions occurred during writing to disk. This is solved in
1.3.3 (which is just days away from an official release, try it!)

jstat -gc -h50 <pid> 1000 will log the garbage collector statistics
every second.
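
For example (the dump path is a placeholder; exact flag availability depends on
the JVM build):

# JVM options: write a heap dump automatically when an OOM happens
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps

# poke at a running JVM without stopping it
jmap -histo <pid>         # object histogram, top entries are the interesting ones
jstat -gc -h50 <pid> 1000 # GC statistics every second, header every 50 lines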

Martijn

On 4/3/08, Jeremy Thomerson [EMAIL PROTECTED] wrote:
 I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I have
  had several apps running on 1.3 since it was in beta with no problems -
  running for months without restarting.

  This app receives more traffic than any of the rest.  We have a decent
  server, and I had always allowed Tomcat 1.5GB of RAM to operate with.  It
  never had a problem doing so, and I didn't have OutOfMemory errors.  Now,
  after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran for
  several days without a problem, but then started dying a couple times a
  day.  Today it has died four times.  Here are a couple odd things about
  this:

- On 1.2.6, I never had a problem with stability - the app would run
weeks between restarts (I restart once per deployment, anywhere from once a
week to at the longest about two months between deploy / restart).
- Tomcat DIES instead of hanging when there is a problem.  Always
before, if I had an issue, Tomcat would hang, and there would be OOM in the
logs.  Now, when it crashes, and I sign in to the server, Tomcat is not
running at all.  There is nothing in the Tomcat logs that says anything, or
in eventvwr.
- I do not get OutOfMemory error in any logs, whereas I have always
seen it in the logs before when I had an issue with other apps.  I am
running Tomcat as a service on Windows, but it writes stdout / stderr to
logs, and I write my logging out to logs, and none of these logs include 
 ANY
errors - they all just suddenly stop at the time of the crash.

  My money is on an OOM error caused by something I am doing with Wicket that
  I shouldn't be.  There are no logs that even say it is an OOM, but the memory
  continues to increase linearly over time as the app runs now (it didn't do
  that before).  My first guess is my previous prolific use of anonymous inner
  classes.  I have seen in the email threads that this shouldn't be done in 1.3.

  Of course, the real answer is that I'm going to be digging through profilers
  and lines of code until I get this fixed.

  My question, though, is from the Wicket devs / experienced users - where
  should I look first?  Is there something that changed between 1.2.6 and 1.3
  that might have caused me problems where 1.2.6 was more forgiving?

  I'm running the app with JProbe right now so that I can get a snapshot of
  memory when it gets really high.

  Thank you,

 Jeremy Thomerson



-- 
Buy Wicket in Action: http://manning.com/dashorst
Apache Wicket 1.3.2 is released
Get it now: http://www.apache.org/dyn/closer.cgi/wicket/1.3.2
