XP timers are poor, and the XP thread scheduler is worse. The Windows 7
thread scheduler is better than all the others: Linux, Solaris, Mac OS X.
This is not a troll; I have been running benchmarks on hundreds of different
machines over the last couple of years, and the results are pretty much
always the same. On Windows 7 the bench runs in 1.01 seconds, on Mac OS X in
1.1-1.2 seconds, on Linux in 1.2 seconds, on Solaris (properly configured)
in 1.2 seconds, and on XP in 1.9 seconds.

These results appear to be mostly independent of hardware. Run the bench on
Windows 7: 1.01 seconds. Reboot the same machine into Linux: 1.2 seconds. Do
this in a virtualized environment and it's 2.4 seconds, unless the guest is
XP, in which case timings jump to 2.8 seconds. As a point of reference, the
bench should ideally run in 1 second, which it will on most newish Sony
machines running Windows 7. As for Sony's result, my guess is that Sony
takes the Windows source and, in the process of adapting it to their
hardware, tweaks the timing drivers. Mac and Linux look much the same,
presumably because the schedulers come from the same source.

Full disclosure: I have no functional Windows machines in my home, nor do I
use them anywhere else; I'm almost completely on Macs, the one variant being
an Ubuntu desktop.

I should add that I would not trust a timing taken on any machine using the
RTC (System.currentTimeMillis()) that is under, say, 50 ms, let alone
anything approaching 1 ms. Think of it this way: how can you measure
something that happens in less than 1 ms with currentTimeMillis()? Answer:
you can't. Question: then why do I get a non-zero answer when I do this?
Answer: because you're counting clock transitions, not the duration of the
event.
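
If you want to see this for yourself, here's a quick probe (my own
throwaway sketch; the class name is mine, nothing from JMeter or any
library): spin until currentTimeMillis() changes value and print the size
of each jump. On XP the jumps are typically 10-16 ms, and that jump is the
finest interval the call can resolve.

    // ClockGranularity.java - spin until the millisecond clock ticks over,
    // then report how far it jumped. Each jump is one "clock transition".
    public class ClockGranularity {
        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) {
                long start = System.currentTimeMillis();
                long now = start;
                while (now == start) {          // busy-wait for the next tick
                    now = System.currentTimeMillis();
                }
                System.out.println("clock advanced by " + (now - start) + " ms");
            }
        }
    }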

Regards,
Kirk

On 2012-02-15, at 10:52 PM, Philippe Mouawad wrote:

> Hello,
> I think you should submit your test as a bug so we can check what could
> explain this.
> I have run many tests on WinXP and never hit this issue, so I am not
> convinced it's the real explanation, although you have reproduced it.
> 
> Can you also run a test with these settings in jmeter.properties:
> 
>   - sampleresult.useNanoTime=false
>   - sampleresult.nanoThreadSleep=0
> 
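> For context (this is my gloss; the comments in jmeter.properties itself
> are the authoritative description): useNanoTime controls whether sample
> times come from System.nanoTime() rather than System.currentTimeMillis(),
> and nanoThreadSleep controls the background thread JMeter uses to re-sync
> the nanoTime offset against the wall clock, with values <= 0 disabling it.
> A commented sketch of the settings above:
> 
>   # take sample timestamps from System.currentTimeMillis() instead of
>   # System.nanoTime()
>   sampleresult.useNanoTime=false
>   # <= 0: don't run the background thread that re-syncs the nanoTime
>   # offset against the wall clock
>   sampleresult.nanoThreadSleep=0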
> 
> Thanks
> Regards
> Philippe
> 
> 
> On Wed, Feb 15, 2012 at 6:39 PM, Robin D. Wilson <rwils...@gmail.com> wrote:
> 
>> FYI, I have confirmed that my test case only fails on 'WinXP'. When I run
>> the exact same test on Win7 (x64), it works fine.
>> 
>> The sad thing is, I've been benchmarking my systems using WinXP for over a
>> year ... sigh.
>> 
>> So now I find out that all my benchmark numbers are essentially bogus.
>> 
>> --
>> Robin D. Wilson
>> Sr. Director of Web Development
>> KingsIsle Entertainment, Inc.
>> VOICE: 512-777-1861
>> www.KingsIsle.com
>> 
>> 
>> -----Original Message-----
>> From: Philippe Mouawad [mailto:philippe.moua...@gmail.com]
>> Sent: Monday, February 13, 2012 2:13 PM
>> To: JMeter Users List
>> Subject: Re: Average times are confusing me...
>> 
>> Mac OS X 10.6.8.
>> 
>> I suggest you open an issue with your attached test plan (exactly the one
>> that has the issue).
>> 
>> Regards
>> Philippe
>> On Mon, Feb 13, 2012 at 8:28 PM, Robin D. Wilson <rwils...@gmail.com>
>> wrote:
>> 
>>> Just out of curiosity - what type of OS was your JMeter GUI running on?
>>> I've been running my tests on Windows XP.
>>> 
>>> --
>>> Robin D. Wilson
>>> Sr. Director of Web Development
>>> KingsIsle Entertainment, Inc.
>>> VOICE: 512-777-1861
>>> www.KingsIsle.com
>>> 
>>> 
>>> -----Original Message-----
>>> From: Philippe Mouawad [mailto:philippe.moua...@gmail.com]
>>> Sent: Friday, February 10, 2012 7:41 AM
>>> To: JMeter Users List
>>> Subject: Re: Average times are confusing me...
>>> 
>>> Hello,
>>> I have just run the same test with a JSP (doing a Thread.sleep(2000),
>>> i.e. 2 s) behind Tomcat 6, using JMeter 2.6.
>>> 
>>> I ran 100 threads that loop 100 times in GUI mode (a single JMeter
>>> instance): all results shown in the Aggregate Report are around 2 s.
>>> 
>>> sampler_label,aggregate_report_count,average,aggregate_report_median,aggregate_report_90%_line,aggregate_report_min,aggregate_report_max,aggregate_report_error%,aggregate_report_rate,aggregate_report_bandwidth
>>> HTTP Request,10000,2006,2004,2012,2001,2195,0.0,49.5793194742609,1224.082013722936
>>> TOTAL,10000,2006,2004,2012,2001,2195,0.0,49.5793194742609,1224.082013722936
>>> 
>>> Summary listener shows this:
>>> 2012/02/10 14:36:00 INFO  - jmeter.reporters.Summariser: Generate Summary Results +  2655 in  54,8s =   48,4/s Avg:  2010 Min:  2001 Max:  2195 Err:     0 (0,00%)
>>> 2012/02/10 14:38:26 INFO  - jmeter.reporters.Summariser: Generate Summary Results +  7345 in 148,9s =   49,3/s Avg:  2005 Min:  2001 Max:  2088 Err:     0 (0,00%)
>>> 2012/02/10 14:38:26 INFO  - jmeter.reporters.Summariser: Generate Summary Results = 10000 in 201,7s =   49,6/s Avg:  2006 Min:  2001 Max:  2195 Err:     0 (0,00%)
>>> 
>>> So as you can see, nothing abnormal.
>>> 
>>> JSP Code:
>>> 
>>> <%@ page  contentType="text/html; charset=UTF-8" %>
>>> <html>
>>> <body>
>>> <%
>>> Thread.sleep(2000);
>>> %>
>>> <!-- generate a roughly 25 KB page -->
>>> <%=org.apache.commons.lang.RandomStringUtils.random(25000,
>>> "abcdefghijklmnopqrstuvwxyz01234567890")%>
>>> <p>Test</p>
>>> </body>
>>> </html>
>>> 
>>> 
>>> Regards
>>> Philippe
>>> http://www.ubik-ingenierie.com
>>> 
>>> On Thu, Feb 9, 2012 at 4:52 PM, sebb <seb...@gmail.com> wrote:
>>> 
>>>> On 9 February 2012 15:38, Robin D. Wilson <rwils...@gmail.com> wrote:
>>>>> Thanks sebb for the replies...
>>>>> 
>>>>> Here's the deal: I am running the same test script on JM2.4 and JM2.6,
>>>>> in GUI mode. The test script has 3 thread groups - but the first and
>>>>> the last thread group are just a 'timer' I created to log the total
>>>>> elapsed time of the test (the first and last groups have 1 thread and
>>>>> 1 request, and take less than 1 second each to run). The 'real' test is
>>>>> the middle thread group. It has 100 threads (0 ramp), and runs 100
>>>>> iterations (10,000 total samples). It simply does a 'POST' to a URL,
>>>>> with 15
>>>>> 
>>>>> So the 'elapsed time' I'm referring to in my test is actually the
>>>>> timestamp taken in the first thread group (in ms since epoch)
>>>>> subtracted from the timestamp taken in the 3rd (last) thread group.
>>>>> That part of my test may only add 2 total seconds to the test, so while
>>>>> it may skew my results slightly, it doesn't explain the vast difference
>>>>> in the 'average' sample duration. According to the Summary Report docs,
>>>>> the "Average" is supposed to be "the average elapsed time of a set of
>>>>> samples". But clearly, if the minimum time it takes to actually get the
>>>>> page is 2 seconds (due to the built-in delay in the cgi-script), there
>>>>> is no way I could have an 'average' elapsed time of less than
>>>>> 2 seconds, yet I'm showing an average elapsed time of ~750 ms (my "Max"
>>>>> elapsed time shows as only 1198!). When I request the page in Firefox,
>>>>> it takes ~2104 ms (using a status bar timer), so I think the cgi script
>>>>> is working correctly.
>>>>> 
>>>>> Sebb asked:
>>>>> 
>>>>>> Again, the throughput calculations are based on total test time. Are
>>>>>> you sure the test run times are comparable?
>>>>> 
>>>>> The test run times are automatically calculated by the 1st and 3rd
>>>>> thread groups. The ~210 seconds total elapsed time is accurate based on
>>>>> my external measurement too (e.g., it is close to what I can observe
>>>>> with my stopwatch).
>>>>> 
>>>>> Both the JM2.4 test and the JM2.6 test are using the exact same ".jmx"
>>>>> test file.
>>>>> 
>>>>>> There's clearly something else going on here.
>>>>> 
>>>>> I don't believe that the Summary Report is accurately calculating
>>>>> anything except the total number of samples and the Avg. Bytes...
>>>> 
>>>> What makes you say that?
>>>> Are the Min and Max really incorrect?
>>>> Error %?
>>>> 
>>>> It's easy enough to check the Summary Results if you can provide the
>>>> CSV sample result files.
>>>> 
>>>>> The cgi-script I'm using definitely takes 2+ seconds to respond after
>>>>> it gets the request (I've measured this with Firefox directly, and it
>>>>> _never_ gets a response in less than 2 seconds). I even changed the
>>>>> 'sleep' to 9 seconds, and JMeter pauses for that long in recording
>>>>> results (e.g., it shows 100 threads run, then waits 9 seconds, shows
>>>>> another 100 threads, etc.), but the numbers just go up to '1758'
>>>>> Average and '2415' Max (which is impossible since it is taking 9+
>>>>> seconds to respond to each request!). It takes over 15 minutes to
>>>>> complete 10,000 samples (and that seems about right: 10,000 samples /
>>>>> 100 threads * 9 seconds each = 900 seconds).
>>>>> 
>>>>> I even went so far as to inject a 2-second sleep in the middle of the
>>>>> response (e.g., pause 2 seconds, send part of the response, pause 2
>>>>> more seconds, send the rest), and I'm still getting average times of
>>>>> ~1000 ms. (That's with 4 seconds of built-in delays, and 2 of those
>>>>> seconds are in the middle of the response.) The browser shows this
>>>>> delay properly, but JMeter isn't calculating it properly.
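>>>>> 
>>>>> (The actual cgi-bin script isn't attached here, so purely as a sketch
>>>>> of the behavior just described - hypothetical class and method names,
>>>>> not the real script - a servlet doing the same split delay would look
>>>>> like this:)
>>>>> 
>>>>> import java.io.IOException;
>>>>> import java.io.PrintWriter;
>>>>> import javax.servlet.http.HttpServlet;
>>>>> import javax.servlet.http.HttpServletRequest;
>>>>> import javax.servlet.http.HttpServletResponse;
>>>>> 
>>>>> public class SplitDelayServlet extends HttpServlet {
>>>>>     protected void doGet(HttpServletRequest req,
>>>>>                          HttpServletResponse resp) throws IOException {
>>>>>         PrintWriter out = resp.getWriter();
>>>>>         pause(2000);                 // 2 s before the first byte
>>>>>         out.println("<html><body>first half");
>>>>>         out.flush();                 // push the partial response out
>>>>>         pause(2000);                 // 2 s in the middle of the response
>>>>>         out.println("second half</body></html>");
>>>>>     }
>>>>> 
>>>>>     private static void pause(long ms) {
>>>>>         try { Thread.sleep(ms); }
>>>>>         catch (InterruptedException e) { Thread.currentThread().interrupt(); }
>>>>>     }
>>>>> }
>>>>> 
>>>>> Any correct 'elapsed' figure for that request has to be at least 4
>>>>> seconds, which is exactly why the ~1000 ms average can't be right.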
>>>>> 
>>>>>> Please recheck the individual sample response times and see how they
>>>>>> compare to the average.
>>>>> 
>>>>> I'm not sure how to do that in JMeter. I can manually hit the page, and
>>>>> it takes about 100 ms longer than the built-in delay I have.
>>>> 
>>>> Add a View Results in Table listener, or just check the CSV sample
>>>> result files.
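>>>> (If the run didn't write a results file, re-running non-GUI with, e.g.,
>>>> "jmeter -n -t test.jmx -l results.csv" will produce one; test.jmx and
>>>> results.csv are placeholder names.)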
>>>> 
>>>>>> If there still appears to be a problem, create a Bugzilla issue and
>>>>>> attach:
>>>>>> - JMX test case
>>>>> 
>>>>> I'm trying to simplify the test case to the bare minimum, so the
>>>>> results will be indisputable. I will also include the 'cgi-bin' script
>>>>> that I'm using, so someone else can easily set up the same test.
>>>> 
>>>> Thanks.
>>>> 
>>>>> 
>>>>>> - log files for JMeter 2.4 and 2.6
>>>>> 
>>>>> Which log files are these? Is it just the 'jmeter.log' that gets
>>>>> created in the 'bin' folder when I run in GUI mode, or do you need
>>>>> another log file?
>>>> 
>>>> jmeter.log
>>>> 
>>>>>> - CSV result files for 2.4 and 2.6
>>>>> 
>>>>> I can do this.
>>>>> 
>>>>> --
>>>>> Robin D. Wilson
>>>>> Sr. Director of Web Development
>>>>> KingsIsle Entertainment, Inc.
>>>>> VOICE: 512-777-1861
>>>>> www.KingsIsle.com
>>>>> 
>>> 
>>> 
>>> --
>>> Regards.
>>> Philippe Mouawad.
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> --
>> Regards.
>> Philippe Mouawad.
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Regards.
> Philippe Mouawad.


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
For additional commands, e-mail: user-h...@jmeter.apache.org
