--- Begin Message ---
The test script has a lot of regex extractors, and nearly all of them
include the '.*' pattern. If your pages are large and contain several
instances of the pattern you are trying to find, a greedy '.*' can end
up matching large chunks of every page and storing them in memory. You
might need to find a way to make your regular expressions less greedy.
Most of your regular expressions also don't escape all the characters
you need to escape. Consider escaping '<', '>' and especially '.',
which otherwise matches any character.
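For example, here's a quick standalone sketch of the difference (plain
java.util.regex rather than the ORO/Perl5 patterns JMeter uses, but as far
as I know the '.*?' and '\.' syntax works the same way there; the page and
patterns below are made up purely for illustration):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone illustration of greedy vs. non-greedy captures.
public class RegexGreedDemo {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer("<a href=\"report.html\">Report</a>");
        for (int i = 0; i < 10000; i++) {
            sb.append(" filler text standing in for a large page ");
        }
        sb.append("<a href=\"other.html\">Other</a>");
        String page = sb.toString();

        // Greedy: '.*' runs to the end of the page and backtracks, so the
        // capture group ends up holding nearly the whole page.
        Matcher greedy = Pattern.compile("href=\"(.*)\\.html\"").matcher(page);
        if (greedy.find()) {
            System.out.println("greedy capture:     " + greedy.group(1).length() + " chars");
        }

        // Non-greedy: '.*?' stops at the first possible match, and '\\.'
        // keeps the '.' literal instead of matching any character.
        Matcher lazy = Pattern.compile("href=\"(.*?)\\.html\"").matcher(page);
        if (lazy.find()) {
            System.out.println("non-greedy capture: " + lazy.group(1).length() + " chars");
        }
    }
}

Multiply the greedy case by 125 threads, each holding captures from its
own responses, and it adds up quickly.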
This is a pretty cool script, though. I ran it here and ran out of
memory after 7 minutes, 31 seconds, after only 6 samples had returned
(they had to time out, of course, which took 3 minutes each).
I can't say exactly what is causing the problem, but my suspicion is
that it's all the regex extractors - maybe they use a lot of memory
setting up, and once you've cloned them 125 times, it's just too much?
Still, I would not expect that to take 1800 MB of memory!
It would be interesting to run a profiler on this test.
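If you don't have a profiler handy, a rough first cut (assuming you're on
a Sun JDK - just a suggestion, I haven't tried it against your plan) is
the hprof agent that ships with the JDK. Something like this in place of
your DEBUG line writes allocation-site statistics to java.hprof.txt when
the JVM exits:

DEBUG="-verbose:gc -XX:+PrintTenuringDistribution -Xrunhprof:heap=sites,depth=10"

That should at least show which classes are eating the memory.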
-Mike
On Wed, 2005-06-29 at 08:15 -0500, Praveen Kallakuri wrote:
> i am attaching the jmx.. all my listeners should be disabled... the
> 135 processes are spawned off in about 8-10 seconds and that's when the
> out of mem errors begin.
>
> On 6/29/05, Michael Stover <[EMAIL PROTECTED]> wrote:
> > What kind of listeners do you have in the test? And how many seconds
> > are "a few"?
> >
> > -Mike
> >
> > On Wed, 2005-06-29 at 07:56 -0500, Praveen Kallakuri wrote:
> > > hello,
> > >
> > > i am using jmeter 2.1.20050327 (compiled from source) on a linux box.
> > > the box has 2068332 kB total memory of which 1932968 kB is free. the
> > > jmx file being used is 492 kB. the number of threads configured is 125
> > > with a total ramp-up time of 2500 seconds.
> > >
> > > within a few seconds after i start jmeter (non-interactive mode), i
> > > see an out of memory error.
> > >
> > > i played with various settings in the jmeter startup script and the
> > > current settings are given below.
> > >
> > > HEAP="-Xms1000m -Xmx1800m" # custom
> > > NEW="-XX:NewSize=512m -XX:MaxNewSize=1024m" #custom
> > > TENURING="-XX:MaxTenuringThreshold=2" # default
> > > EVACUATION="-XX:MaxLiveObjectEvacuationRatio=60%" # custom
> > > RMIGC="-Dsun.rmi.dgc.client.gcInterval=600000
> > > -Dsun.rmi.dgc.server.gcInterval=600000" # default
> > > PERM="-XX:PermSize=64m -XX:MaxPermSize=64m" #default
> > > DEBUG="-verbose:gc -XX:+PrintTenuringDistribution" #default
> > >
> > > I read in previous postings about tuning the evacuation settings
> > > (which was originally 20% I think), but that did not help.
> > >
> > > A process listing shows 135-136 java processes spawned off within a
> > > few seconds of starting the test, and the out of memory errors start
> > > occurring pretty much as the 135th process gets spawned.
> > >
> > > I remember reading in some java docs about the stack size on linux
> > > systems... a ulimit command shows this:
> > >
> > > core file size (blocks, -c) 0
> > > data seg size (kbytes, -d) unlimited
> > > file size (blocks, -f) unlimited
> > > max locked memory (kbytes, -l) unlimited
> > > max memory size (kbytes, -m) unlimited
> > > open files (-n) 1024
> > > pipe size (512 bytes, -p) 8
> > > stack size (kbytes, -s) unlimited
> > > cpu time (seconds, -t) unlimited
> > > max user processes (-u) unlimited
> > > virtual memory (kbytes, -v) unlimited
> > >
> > > I am at a loss as to what more I can do... any suggestions?
> > >
> >
> >
> >
>
>
--- End Message ---
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]