>The throughput remained at approx 17/second throughout the test.
Even for lower values of users? It should have climbed to this value and
then stabilised, right?
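One quick sanity check on that flat 17/sec is Little's Law: for a closed test with no think time, concurrent users ≈ throughput × average response time, so past saturation the throughput stays flat and the response time grows with users instead. A rough sketch (plain Python, numbers taken from the results further down the thread):

```python
# Little's Law for a closed system with no think time:
#   concurrent_users ~= throughput (req/s) * avg_response_time (s)
def implied_throughput(users, avg_response_ms):
    """Throughput (req/s) implied by Little's Law at a given user count."""
    return users / (avg_response_ms / 1000.0)

# At 100 users the posted average response time was 5796 ms:
print(round(implied_throughput(100, 5796), 1))  # ~17.3/sec, matching the plateau
```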
>Then when the test hit 125 users, it started to throw errors at the start,
>but then stabilized; the same happened for 150 and 200.
What errors? Socket exceptions? This can happen if the server can't absorb
the initial burst of traffic, but once the JMeter client starts taking time
to process results, the actual concurrency hitting the server drops, so the
server may then be able to handle the traffic.
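If the errors are socket/connection failures only during start-up, a longer ramp-up in the Thread Group spreads the thread starts out and avoids the burst. For illustration only, the relevant properties inside the Thread Group element of the .jmx look roughly like this (names as in JMeter 2.3; check against your own plan):

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="125 Users">
  <stringProp name="ThreadGroup.num_threads">125</stringProp>
  <!-- start the 125 threads over 60 seconds instead of all at once -->
  <stringProp name="ThreadGroup.ramp_time">60</stringProp>
</ThreadGroup>
```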

>What's the maximum number of users we can throw at this server?
>Can this server handle 200 concurrent users?
Depends. If your CPU is pegged at 100%, then I doubt your administrators will
allow it, even if it doesn't crash the machine. For example, we used to
calculate the allowed load as that which would keep the CPU under 60-80%.
Another factor is response time. If the system takes more than, say, 10
seconds to return a response for 100 users (where 10 seconds is the agreed
SLA), then we would say it can't handle the load for 100 users. You need
to define what's acceptable for your system. If you are getting errors (at
125 users) then I would conclude that that's your hard limit right now and
you need to tune the server ...
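To make "acceptable" concrete, you can run the aggregate rows through explicit pass/fail criteria. A rough sketch in Python; the 10-second SLA is only the example figure above, and the rows are a subset of the results quoted below:

```python
import csv
import io

# A subset of the aggregate rows posted in this thread:
# label, sample count, average response time (ms), error fraction.
results = """\
label,count,average,error
100 User,5000,5796,0
125 User,2500,6200,0.0724
150 User,3000,7947,0.047
200 User,4000,10147,0.0125
"""

SLA_MS = 10000  # example agreed response-time SLA (10 seconds)

def first_breach(csv_text, sla_ms):
    """Return the label of the first step with errors or an SLA miss."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["error"]) > 0 or int(row["average"]) > sla_ms:
            return row["label"]
    return None

print(first_breach(results, SLA_MS))  # -> 125 User (errors start there)
```

By this criterion the 125-user step is the first breach: the average is still inside the SLA, but the error rate is no longer zero.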


regards
deepak



On Fri, Sep 11, 2009 at 6:19 AM, Bruce Foster <[email protected]> wrote:

> Thanks deepak and sebb,
>
> Today I ran some tests on the server; the results are attached below.
>
> The benchmark was run on:
>
> OS: Windows 2003 Server R2
> CPU: Pentium Dual-Core E5200, 2.5GHz
> RAM: 4 GB DDR2
> HDD: 500GB SATA, 7200RPM
> JMeter 2.3.4
>
> The test plan requests a web service, asking for an 800x600px image with
> random, dynamic requests; there is no caching on the server side either.
>
> Using Tomcat 5.5 as the app server.
>
>
> The machine is not a server but a workstation, and I'm using it to
> test the performance. The client machine was another one quite similar
> to this, but with a Xeon processor. (I will swap the two later to compare
> the results.)
>
>
> sampler_label,count,average,min,max,stddev,error%,rate,bandwidth,average_bytes
> 1 User,500,110,51,188,28.3553116,0,9.043226623,747.0846722,84595.326
> 5 User,500,285,71,711,113.3641348,0,17.02823281,1413.342425,84991.946
> 10 User,1000,568,66,2149,292.8266531,0,16.81944328,1416.137681,86217.181
> 25 User,2500,1420,74,4236,450.7173973,0,17.12774558,1405.982954,84058.1464
> 50 User,2500,2902,202,5408,510.3698817,0,16.86147288,1404.980977,85324.724
> 75 User,3750,4348,265,6638,605.8713867,0,16.93285109,1405.05503,84969.5272
> 100 User,5000,5796,193,10475,751.1101016,0,16.95909805,1388.316015,83827.3118
> 125 User,2500,6200,3,10641,1985.596293,0.0724,18.11069255,1389.256787,78550.2236
> 150 User,3000,7947,2,13988,2292.793403,0.047,17.68023526,1385.007784,80216.578
> 200 User,4000,10147,9,21149,4684.772214,0.0125,16.97771251,1387.075046,83660.55475
>
> TOTAL,25250,5417,2,21149,3672.792818,0.014732673,16.80385936,1367.333332,83323.08083
>
> I had a test plan stepping from 1 to 200 users in 10 steps to test the
> server. Since I'm requesting a web service, the load on the server is quite
> high. I could see the CPU was at almost 100% right at 50 users. Then when
> the test hit 125 users, it started to throw errors at the start but then
> stabilized; the same happened for 150 and 200.
>
> The throughput remained at approx 17/second throughout the test.
>
> With this result, what would the conclusion be?
>
> What's the maximum number of users we can throw at this server?
> Can this server handle 200 concurrent users?
>
> How do I present this result? I'm a bit confused ... hope you can guide me.
>
> Thanks
>
> Bruce
>
>
>
>
> On Wed, Sep 9, 2009 at 2:31 AM, sebb <[email protected]> wrote:
> > On 08/09/2009, Deepak Shetty <[email protected]> wrote:
> >> >I need to log the time taken by each request when 100/200/300/400/500
> >>  >concurrent requests are made. Hope the logger can do that.
> >>
> >> Yes.
> >>
> >>
> >>  >when i have 5 users (threads) and 50 users (threads), the throughput is
> >>  >the same, 12/sec. Now how do I explain the user concurrency, load /
> >>  >stress?
> >>
> >> See explanation on throughput curves.
> >>
> http://books.google.com/books?id=HTX8DyD0WzkC&pg=PA12&lpg=PA12&dq=throughput+curve&source=bl&ots=7qYRIZiPX9&sig=7UoxT-8gpbmqWwwUcu0aROe_QWA&hl=en&ei=1X6mSqLyHpDK_gbMgvC-CQ&sa=X&oi=book_result&ct=result&resnum=5#v=onepage&q=throughput%20curve&f=false
> >>
> >>  You have reached your throughput 'plateau' and need to check response
> >>  times as well.
> >>
> >
> > This could be due to:
> > * network saturation (unlikely at this throughput unless the responses
> > are huge, though using virtual hosts on a single physical system may
> > be relevant)
> > * JMeter limit (unlikely with only 50 threads - assuming you have not
> > added a throughput timer!)
> > * host resource exhaustion - possible, given that everything is
> > running on the same host
> > * server resource exhaustion - again possible, if not configured with
> > enough sessions.
> >
> >>  >How to measure the load / stress on the server?
> >>
> >> That's server-specific; your OS will give you tools to do this (e.g.
> >>  perfmon on Windows, vmstat on Unix, other tools).
> >>  regards
> >>
> >> deepak
> >>
> >>
> >>
> >>  On Tue, Sep 8, 2009 at 3:35 AM, Bruce Foster <[email protected]> wrote:
> >>
> >>  > Thanks All,
> >>  >
> >>  > I will try the options and let you know. Got distracted with some
> >>  > other work and will spend some time on the benchmarking next week.
> >>  >
> >>  > I need to log the time taken by each request when 100/200/300/400/500
> >>  > concurrent requests are made. Hope the logger can do that.
> >>  >
> >>  > I have some basic questions, being a newbie:
> >>  >
> >>  > when i have 5 users (threads) and 50 users (threads), the throughput
> >>  > is the same, 12/sec. Now how do I explain the user concurrency, load /
> >>  > stress?
> >>  >
> >>  > I need to find out if the system can handle 500 concurrent users.
> >>  >
> >>  > Throughput is the response time, right? It turns out to be around
> >>  > 85ms (12/sec). Since there is no change from 5 to 50, how do I test
> >>  > for 500 concurrent users (or 300 or 200)?
> >>  >
> >>  > How to measure the load / stress on the server?
> >>  >
> >>  > Thanks a lot
> >>  >
> >>  > Bruce
> >>  >
> >>  >
> >>  >
> >>  > On Sat, Sep 5, 2009 at 6:09 PM, sebb<[email protected]> wrote:
> >>  > > On 05/09/2009, Bruce Foster <[email protected]> wrote:
> >>  > >> Hi Deepak and others,
> >>  > >>
> >>  > >>  Thanks for quick response and help.
> >>  > >>
> >>  > >>  Yes, the listener Save_Responses_to_a_file did the trick for me.
> >>  > >>  Just ran a test with 1000 requests to see the responses and got all
> >>  > >>  the images saved in a directory. Well, the purpose was to check the
> >>  > >>  responses and not the performance (response time). After making sure
> >>  > >>  that the images are correct, I ran the actual test to get the
> >>  > >>  performance results.
> >>  > >>
> >>  > >>  Well, I'm using the random function and it worked well to generate
> >>  > >>  random bounding-box requests. Also, I adapted the osgeo test method
> >>  > >>  of using a pre-generated csv file.
> >>  > >>
> >>  > >>  Got a good result of 12 per second in one method for totally random
> >>  > >>  requests, and 20 per second for 800x600px random bbox requests. Need
> >>  > >>  to test further.
> >>  > >>
> >>  > >>  Now I have to find out how to log the 10000 request times. JMeter
> >>  > >>  gives only a summary/average.
> >>  > >
> >>  > > In the GUI, that depends on the Listener - e.g. the Table View
> >>  > > Listener shows response times. But don't use this for a performance
> >>  > > test as it will use lots of memory.
> >>  > >
> >>  > > Just save the responses to a file, and you have all the details
> >>  > > there, depending on what you have configured. Probably easiest to use
> >>  > > CSV output.
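For the record, CSV output is controlled by the save-service entries in jmeter.properties; the property names below are from the JMeter 2.3 documentation, so verify them against your version:

```
# Write sample results as CSV rather than XML
jmeter.save.saveservice.output_format=csv
# Don't store response bodies in the results file
jmeter.save.saveservice.response_data=false
```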
> >>  > >
> >>  > >>  Cheers
> >>  > >>
> >>  > >> bruce.
> >>  > >>
> >>  > >>
> >>  > >>
> >>  > >>
> >>  > >>
> >>  > >>  On Thu, Sep 3, 2009 at 7:16 PM, sebb<[email protected]> wrote:
> >>  > >>  > On 03/09/2009, Adrian Speteanu <[email protected]> wrote:
> >>  > >>  >> true, you can use either method for what you said you need, but
> >>  > >>  >>  in this case, saving the file on the test machine will
> >>  > >>  >>  significantly increase the stress on the test environment
> >>  > >>  >>  (quality image files mean lots of space, and that means disk
> >>  > >>  >>  usage).
> >>  > >>  >>
> >>  > >>  >>  if you run the test with fewer requests and see that you get the
> >>  > >>  >>  responses you expect, then you will also get these responses in a
> >>  > >>  >>  load / stress test even if you don't save the files locally.
> >>  > >>  >
> >>  > >>  > Not necessarily; the server may degrade under load.
> >>  > >>  >
> >>  > >>  > For checking responses such as images, consider using
> >>  > >>  >
> >>  > >>  >
> >>  > >>  > http://jakarta.apache.org/jmeter/usermanual/component_reference.html#MD5Hex_Assertion
> >>  > >>  >
> >>  > >>  > Or you can use the HTTP sampler option "Save response as MD5 hash?"
> >>  > >>  > and check that.
> >>  > >>  >
> >>  > >>  >>  this is recommended.
> >>  > >>  >>
> >>  > >>  >>
> >>  > >>  >>  On Tue, Sep 1, 2009 at 2:04 AM, Deepak Shetty <[email protected]> wrote:
> >>  > >>  >>  > Hi
> >>  > >>  >>  > you can add
> >>  > >>  >>  > http://jakarta.apache.org/jmeter/usermanual/component_reference.html#Save_Responses_to_a_file
> >>  > >>  >>  > OR you can add a BeanShell Post Assertion that can read the
> >>  > >>  >>  > bytes and save it to whatever you want or run comparisons
> >>  > >>  >>  > OR
> >>  > >>  >>  > http://jakarta.apache.org/jmeter/usermanual/component_reference.html#Sample_Result_Save_Configuration
> >>  > >>  >>  > (Check Save Response Data) - I wouldn't do this, though, because
> >>  > >>  >>  > some binary data can cause the XML to break
> >>  > >>  >>  >
> >>  > >>  >>  >
> >>  > >>  >>  > regards
> >>  > >>  >>  > deepak
> >>  > >>  >>  >
> >>  > >>  >>  > On Mon, Aug 31, 2009 at 3:57 PM, Bruce Foster <[email protected]> wrote:
> >>  > >>  >>  >
> >>  > >>  >>  >> Hi List,
> >>  > >>  >>  >>
> >>  > >>  >>  >> I'm totally new to jmeter and also benchmarking.
> >>  > >>  >>  >>
> >>  > >>  >>  >> I'm testing the WMS (web map service) performance of three
> >>  > >>  >>  >> server software packages. Basically, they are GET requests for
> >>  > >>  >>  >> images from a server.
> >>  > >>  >>  >>
> >>  > >>  >>  >> Is there a way to SAVE the requested images? I have the
> >>  > >>  >>  >> mandate to make sure that the responses from the servers are
> >>  > >>  >>  >> exactly the same image (in resolution and quality) that we
> >>  > >>  >>  >> requested.
> >>  > >>  >>  >>
> >>  > >>  >>  >> When I did a test, I ran a network monitor and could see 70MB
> >>  > >>  >>  >> of data being transferred. Now, where do I look for that? Does
> >>  > >>  >>  >> JMeter save it in a cache?
> >>  > >>  >>  >>
> >>  > >>  >>  >> Note, I'm doing everything on a VMware machine running on my
> >>  > >>  >>  >> notebook.
> >>  > >>  >>  >>
> >>  > >>  >>  >>
> >>  > >>  >>  >> Thanks
> >>  > >>  >>  >> Bruce
> >>  > >>  >>  >>
> >>  > >>  >>  >>
> >>  > ---------------------------------------------------------------------
> >>  > >>  >>  >> To unsubscribe, e-mail:
> >>  > [email protected]
> >>  > >>  >>  >> For additional commands, e-mail:
> >>  > [email protected]
> >>  > >>  >>  >>
> >>  > >>  >>  >>
> >>  > >>  >>  >
> >>  > >>  >>
> >>  > >>  >>
> >>  > >>  >>
> >>  > >>  >>
> >>  > >>  >
> >>  > >>  >
> >>  > >>  >
> >>  > >>  >
> >>  > >>
> >>  > >>
> >>  > >>
> >>  > >>
> >>  > >
> >>  > >
> >>  > >
> >>  > >
> >>  >
> >>  >
> >>  >
> >>
> >
> >
> >
>
>
>
>
