> The test script has a lot of regex extractors, and nearly all of them
> include the '.*' pattern in them.  If your pages are large with

Right.. I meant to mention the number of regex extractors in the
initial description, but it slipped my mind. There are around 3-4
regex extractors for each HTTP sampler. I figured that could be quite
intense, but I expected it to be more of a CPU hog than a memory hog
because:

- Almost all the regex patterns match at most 1-2 strings in the response.
- Even for those regexes that could potentially have more than 1-2
matches, I set the match number (or similarly named setting in the
regex extractor) to 1 so that only the first match is stored. My
expectation is that the regex engine stops parsing the rest of the
response the moment it gets that first match.
- You are right that there are a lot of .* matches, which force the
regex engine to be greedy, but as I mentioned above, the match itself
would only be the length of a regular URL (in fact far less than 256
characters). I will consider changing the regexes for future versions
of this jmx.
- I am not sure about escaping '<', '>', and '.'.. I have to look at
the individual regexes to see what you are saying.
- What I do doubt is how long GC takes to clean up historical regex
extractors and samplers. You mentioned timeouts for samplers in your
mail.. I am ignorant about this, but if references to these objects
are retained across GCs, that could be the problem.
- I could try other GC modes (concurrent mark-sweep has proven to be
quite a memory saver in other Java apps I have used..).
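
For what it's worth, here is a small standalone sketch of the greedy
vs. reluctant point, using a made-up two-link response body and
java.util.regex for the demo (JMeter itself uses a different regex
library, but the greediness semantics are the same):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Hypothetical response body with two links (stand-in for a real page).
        String body = "<a href=\"/app/page1\">one</a> filler "
                    + "<a href=\"/app/page2\">two</a>";

        // Greedy: .* runs to the end of the input and backtracks to the
        // LAST '"', so the stored capture swallows everything in between.
        Matcher greedy = Pattern.compile("href=\"(.*)\"").matcher(body);
        greedy.find();
        System.out.println(greedy.group(1));
        // -> /app/page1">one</a> filler <a href="/app/page2

        // Reluctant: .*? stops at the FIRST '"', and a single find()
        // returns as soon as match #1 is located, so neither the capture
        // nor the scan grows with the size of the page.
        Matcher lazy = Pattern.compile("href=\"(.*?)\"").matcher(body);
        lazy.find();
        System.out.println(lazy.group(1)); // -> /app/page1

        // Unescaped '.' matches any character, so the pattern
        // "index.html" also matches "indexXhtml"; escape it as \.
        System.out.println("indexXhtml".matches("index.html"));   // true
        System.out.println("indexXhtml".matches("index\\.html")); // false
    }
}
```

If the extractor stores the captured group, the greedy version keeps a
chunk that grows with the page, which would explain memory rather than
CPU being the bottleneck.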

But my core question still remains: how can I make JMeter use the
rest of the available memory? It takes up at most 1 GB, and there are
18 GB more that could be used. I am not sure how to make JMeter use
the rest of that memory. Any suggestions on this?
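
One sanity check I could run first (a generic JVM sketch, not anything
JMeter-specific) is to print the heap ceiling the JVM actually granted.
A 32-bit JVM on Linux typically cannot address much more than ~2 GB of
heap regardless of -Xmx, so using tens of gigabytes would need a 64-bit
JVM:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reports the -Xmx ceiling the JVM actually accepted.
        // If this prints far less than the -Xmx asked for, the JVM capped
        // it (e.g. 32-bit address space); if it prints ~1800 MB but usage
        // stalls near 1 GB, the limit is elsewhere (GC settings,
        // per-thread stacks, ulimits).
        System.out.println("max heap (MB):   " + rt.maxMemory() / mb);
        System.out.println("total heap (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap (MB):  " + rt.freeMemory() / mb);
    }
}
```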


> several instances of the pattern you are trying to find, this could end
> up matching large chunks of every page and storing them in memory.  You
> might need to find a way to make less greedy regular expressions.
> 
> Most of your regular expression also don't escape all the characters you
> need to escape.  Consider escaping '<' '>' and '.'
> 
> This is a pretty cool script though.  I ran it here and ran out of
> memory after 7 minutes, 31 seconds, after only 6 samples had returned
> (they had to timeout, of course, which took 3 minutes each).
> 
> I can't say exactly what is causing the problem, but my suspicion is all
> the regex extractors - maybe they use a lot of memory setting up, and
> once you've cloned them 125 times, it's just too much?  Still, I would
> not expect that to take 1800MB of memory!
> 
> It would be interesting to run a profiler on this test.
> 
> -Mike
> 
> 
> On Wed, 2005-06-29 at 08:15 -0500, Praveen Kallakuri wrote:
> > i am attaching the jmx.. all my listeners should be disabled... the
> > 135 processes are spawned off in about 8-10 seconds and that's when the
> > out-of-memory errors begin.
> >
> > On 6/29/05, Michael Stover <[EMAIL PROTECTED]> wrote:
> > > What kind of listeners do you have in the test?  And how many seconds
> > > are "a few"?
> > >
> > > -Mike
> > >
> > > On Wed, 2005-06-29 at 07:56 -0500, Praveen Kallakuri wrote:
> > > > hello,
> > > >
> > > > i am using jmeter 2.1.20050327 (compiled from source) on a linux box.
> > > > the box has 2068332 kB total memory of which 1932968 kB is free. the
> > > > jmx file being used is 492 kB. the number of threads configured is 125
> > > > with a total rampup time of 2500 seconds.
> > > >
> > > > within a few seconds after i start jmeter (non-interactive mode), i
> > > > see an out of memory error.
> > > >
> > > > i played with various settings in the jmeter startup script and the
> > > > current settings are given below.
> > > >
> > > > HEAP="-Xms1000m -Xmx1800m"  # custom
> > > > NEW="-XX:NewSize=512m -XX:MaxNewSize=1024m"  #custom
> > > > TENURING="-XX:MaxTenuringThreshold=2" # default
> > > > EVACUATION="-XX:MaxLiveObjectEvacuationRatio=60%" # custom
> > > > RMIGC="-Dsun.rmi.dgc.client.gcInterval=600000
> > > > -Dsun.rmi.dgc.server.gcInterval=600000" # default
> > > > PERM="-XX:PermSize=64m -XX:MaxPermSize=64m" #default
> > > > DEBUG="-verbose:gc -XX:+PrintTenuringDistribution" #default
> > > >
> > > > I read in previous postings about tuning the evacuation settings
> > > > (which was originally 20% I think), but that did not help.
> > > >
> > > > A process listing shows 135-136 java processes spawned off within a
> > > > few seconds of starting the test, and the out of memory errors start
> > > > occurring pretty much around the 135th process getting spawned.
> > > >
> > > > I remember reading in some java docs about the stack size on linux
> > > > systems... a ulimit command shows this:
> > > >
> > > > core file size        (blocks, -c) 0
> > > > data seg size         (kbytes, -d) unlimited
> > > > file size             (blocks, -f) unlimited
> > > > max locked memory     (kbytes, -l) unlimited
> > > > max memory size       (kbytes, -m) unlimited
> > > > open files                    (-n) 1024
> > > > pipe size          (512 bytes, -p) 8
> > > > stack size            (kbytes, -s) unlimited
> > > > cpu time             (seconds, -t) unlimited
> > > > max user processes            (-u) unlimited
> > > > virtual memory        (kbytes, -v) unlimited
> > > >
> > > > I am at a loss as to what more I can do... any suggestions?
> > > >
> > >
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > > For additional commands, e-mail: [EMAIL PROTECTED]
> > >
> > >
> >
> >
> 
> 
> 


-- 
                                     k.p.
