try a breakpoint in ISessionStore.bind() - that is where the wicket
session is pushed into httpsession
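
if it helps, something like this (just a sketch against 1.3's default
HttpSessionStore and its onBind() hook - if you use a different store,
wrap that one instead; MyApplication/HomePage are placeholders) will
print a stack trace every time a session gets bound, so you can see
exactly which request triggered it:

import org.apache.wicket.Request;
import org.apache.wicket.Session;
import org.apache.wicket.protocol.http.HttpSessionStore;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.session.ISessionStore;

public class MyApplication extends WebApplication {
    protected ISessionStore newSessionStore() {
        return new HttpSessionStore(this) {
            protected void onBind(Request request, Session newSession) {
                // dump where the bind came from
                new Exception("wicket session bound").printStackTrace();
                super.onBind(request, newSession);
            }
        };
    }

    public Class getHomePage() {
        return HomePage.class; // your home page
    }
}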

-igor


On Fri, Apr 11, 2008 at 9:26 PM, Jeremy Thomerson
<[EMAIL PROTECTED]> wrote:
> Thanks for the insight - I didn't know that the webapp had to make a call
>  to force the cookie-less support.  Someone asked how often Google is
>  crawling us.  It seems like at any given point of almost any day, we have
>  one crawler or another going through the site.  I included some numbers
>  below to give an idea.
>
>  Igor - thanks - it could easily be the search form, which is the only thing
>  that would be stateful on about 95% of the pages that will be crawled.  I
>  made myself a note yesterday that I need to look at making that a stateless
>  form to see if that fixes the unnecessary session creation.  I'll post the
>  results.
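>
>  For the record, what I'm planning to try is roughly this (untested sketch -
>  the field and page names are made up):
>
>  Form form = new StatelessForm("searchForm") {
>      protected void onSubmit() {
>          // bounce to a bookmarkable page so rendering the results
>          // doesn't need any session state either
>          PageParameters params = new PageParameters();
>          params.add("q", searchField.getModelObjectAsString());
>          setResponsePage(SearchResultsPage.class, params);
>      }
>  };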
>
>  The one thing I have determined from all this (which answers a question from
>  the other thread) is that Google (and the other crawlers) is definitely
>  going to pages with a jsessionid in the URL, but the jsessionid is not
>  appearing in the search results (with 2 exceptions out of 30,000+ pages
>  indexed).  But I know that only a month or so ago, there were hundreds of
>  pages from our site in Google's index that had jsessionids in the URLs.
>  Could they be stripping the jsessionid from URLs they visit now?  I haven't
>  found anywhere that they volunteer much information on this matter.
>
>  Bottom line - thanks for everyone's help - I have a bandaid on this now
>  which will buy me the time to see what's creating the early unnecessary
>  sessions.  Is there a particular place in the code I should put a breakpoint
>  to see where the session is being created / where it says "oh, you have a
>  stateful page - here's the component that makes it stateful"?  That's where
>  I'm headed next, so if anyone knows where that piece of code is, the tip
>  would be greatly appreciated.
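>
>  In the meantime, this is the kind of quick check I had in mind for finding
>  the stateful component (untested sketch against the 1.3 API, e.g. called
>  from onBeforeRender() in a base page class):
>
>  visitChildren(new Component.IVisitor() {
>      public Object component(Component c) {
>          // print every component that reports itself as stateful
>          if (!c.isStateless()) {
>              System.out.println("stateful: " + c.getPageRelativePath()
>                  + " [" + c.getClass().getName() + "]");
>          }
>          return CONTINUE_TRAVERSAL;
>      }
>  });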
>
>  Thanks again,
>  Jeremy
>
>  Here are a few numbers for the curious.  I took a four-minute segment of
>  our logs from a very slow traffic period - the middle of the night.  In
>  that time, 67 sessions were created.  Then I did reverse DNS lookups on
>  the IPs.  The traffic was from:
>
>  cuill.com crawler    4    (interesting - a new search engine I didn't know about)
>  googlebot            4
>  live.com bot         1
>  unknown             13
>  user                28
>  yahoo crawler       26
>
>
>
>
>  On Fri, Apr 11, 2008 at 9:20 PM, Igor Vaynberg <[EMAIL PROTECTED]>
>  wrote:
>
>
>
>  > On Fri, Apr 11, 2008 at 6:37 PM, Jeremy Thomerson
>  > <[EMAIL PROTECTED]> wrote:
>  > >  If you go to http://www.texashuntfish.com/thf/app/home, you will
>  > >  notice that the first time you hit the page, there are jsessionids
>  > >  in every link - same if you go there with cookies disabled.
>  >
>  > as far as i know jsessionid is only appended once an http session is
>  > created and needs to be tracked. so the fact you see it right after
>  > you go to /app/home should tell you that right away the session is
>  > created and bound. not good. something in your page is stateful.
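>  >
>  > the mechanics are plain servlet api, nothing wicket-specific. roughly
>  > (illustrative sketch, not real wicket code):
>  >
>  > import javax.servlet.http.HttpServletRequest;
>  > import javax.servlet.http.HttpServletResponse;
>  >
>  > public class JSessionIdDemo {
>  >     static void demo(HttpServletRequest req, HttpServletResponse resp) {
>  >         // no session yet: encodeURL returns the url unchanged
>  >         String before = resp.encodeURL("/thf/app/home");
>  >         // session created and bound
>  >         req.getSession(true);
>  >         // now encodeURL appends ;jsessionid=... and keeps doing so
>  >         // until the container sees the JSESSIONID cookie come back
>  >         String after = resp.encodeURL("/thf/app/home");
>  >     }
>  > }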
>  >
>  > >  I think this problem is caused by something making the session bind
>  > >  at an earlier time than it did when I was using 1.2.6 - it's probably
>  > >  still something that I'm doing weird, but I need to find it.
>  >
>  > i think this is unlikely. if i remember correctly delayed session
>  > creation was introduced in 1.3.0. 1.2.6 _always created a session on
>  > first request_ regardless of whether or not the page you requested was
>  > stateless or stateful.
>  >
>  > -igor
>  >
>  >
>  > >
>  > >  On Fri, Apr 11, 2008 at 3:33 AM, Johan Compagner <[EMAIL PROTECTED]>
>  > >  wrote:
>  > >
>  > >
>  > >
>  > >  > by the way, it is all your own fault that you get so many sessions.
>  > >  > I just searched for your other mails and came across "Removing the
>  > >  > jsessionid for SEO",
>  > >  >
>  > >  > where you were explaining that you remove the jsessionids from the
>  > >  > urls..
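>  > >  >
>  > >  > i mean something like this kind of filter (just a sketch of what you
>  > >  > described - the class name and user-agent check are invented):
>  > >  >
>  > >  > import java.io.IOException;
>  > >  > import javax.servlet.*;
>  > >  > import javax.servlet.http.*;
>  > >  >
>  > >  > public class StripJSessionIdFilter implements Filter {
>  > >  >     public void doFilter(ServletRequest req, ServletResponse res,
>  > >  >             FilterChain chain) throws IOException, ServletException {
>  > >  >         HttpServletResponse resp = (HttpServletResponse) res;
>  > >  >         String ua = ((HttpServletRequest) req).getHeader("User-Agent");
>  > >  >         if (ua != null && ua.toLowerCase().indexOf("bot") != -1) {
>  > >  >             // never let the container append jsessionid for bots
>  > >  >             resp = new HttpServletResponseWrapper(resp) {
>  > >  >                 public String encodeURL(String url) { return url; }
>  > >  >                 public String encodeRedirectURL(String url) { return url; }
>  > >  >             };
>  > >  >         }
>  > >  >         chain.doFilter(req, resp);
>  > >  >     }
>  > >  >     public void init(FilterConfig cfg) {}
>  > >  >     public void destroy() {}
>  > >  > }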
>  > >  >
>  > >  > johan
>  > >  >
>  > >  >
>  > >  > On Thu, Apr 3, 2008 at 7:23 AM, Jeremy Thomerson
>  > >  > <[EMAIL PROTECTED]> wrote:
>  > >  >
>  > >  > > I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I
>  > >  > > have had several apps running on 1.3 since it was in beta with no
>  > >  > > problems - running for months without restarting.
>  > >  > >
>  > >  > > This app receives more traffic than any of the rest.  We have a
>  > >  > > decent server, and I had always allowed Tomcat 1.5GB of RAM to
>  > >  > > operate with.  It never had a problem doing so, and I didn't have
>  > >  > > OutOfMemory errors.  Now, after the upgrade to 1.3.2, I am having
>  > >  > > all sorts of trouble.  It ran for several days without a problem,
>  > >  > > but then started dying a couple of times a day.  Today it has died
>  > >  > > four times.  Here are a couple of odd things about this:
>  > >  > >
>  > >  > >   - On 1.2.6, I never had a problem with stability - the app would
>  > >  > >   run weeks between restarts (I restart once per deployment, anywhere
>  > >  > >   from once a week to at the longest about two months between deploy
>  > >  > >   / restart).
>  > >  > >   - Tomcat DIES instead of hanging when there is a problem.  Always
>  > >  > >   before, if I had an issue, Tomcat would hang, and there would be an
>  > >  > >   OOM in the logs.  Now, when it crashes and I sign in to the server,
>  > >  > >   Tomcat is not running at all.  There is nothing in the Tomcat logs
>  > >  > >   that says anything, or in eventvwr.
>  > >  > >   - I do not get an OutOfMemory error in any logs, whereas I have
>  > >  > >   always seen it in the logs before when I had an issue with other
>  > >  > >   apps.  I am running Tomcat as a service on Windows, but it writes
>  > >  > >   stdout / stderr to logs, and I write my logging out to logs, and
>  > >  > >   none of these logs include ANY errors - they all just suddenly stop
>  > >  > >   at the time of the crash.
>  > >  > >
>  > >  > > My money is on an OOM error caused by something I'm doing with
>  > >  > > Wicket that I shouldn't be.  There are no logs that even say it is
>  > >  > > an OOM, but the memory now increases linearly over time as the app
>  > >  > > runs (it didn't do that before).  My first guess is my previous
>  > >  > > prolific use of anonymous inner classes.  I have seen in the email
>  > >  > > threads that this shouldn't be done in 1.3.
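>  > >  > >
>  > >  > > For example, I have a lot of code shaped like this (illustrative -
>  > >  > > the names are made up), where a plain Model, or an anonymous
>  > >  > > subclass of it, pins whatever it wraps into the page for as long
>  > >  > > as the page lives:
>  > >  > >
>  > >  > > add(new Label("results", new Model(bigSearchResults)));
>  > >  > >
>  > >  > > which I suspect I need to convert to detachable models so the data
>  > >  > > is released after each request:
>  > >  > >
>  > >  > > add(new Label("results", new LoadableDetachableModel() {
>  > >  > >     protected Object load() {
>  > >  > >         // reloaded on demand, detached after render
>  > >  > >         return searchDao.findResults();
>  > >  > >     }
>  > >  > > }));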
>  > >  > >
>  > >  > > Of course, the real answer is that I'm going to be digging through
>  > >  > > profilers and lines of code until I get this fixed.
>  > >  > >
>  > >  > > My question for the Wicket devs / experienced users, though: where
>  > >  > > should I look first?  Is there something that changed between 1.2.6
>  > >  > > and 1.3 that might have caused me problems where 1.2.6 was more
>  > >  > > forgiving?
>  > >  > >
>  > >  > > I'm running the app with JProbe right now so that I can get a
>  > >  > > snapshot of memory when it gets really high.
>  > >  > >
>  > >  > > Thank you,
>  > >  > > Jeremy Thomerson
>  > >  > >
>  > >  >
>  > >
>  >
>
>
