Thank you, Felix. This should help.

I reran the same tests with JMeter 5.3, and you are right: the
connections are no longer reopened. That's a great change!

I believe there is still a problem, though. I previously measured average
response times of 31 ms without think time and 42 ms with think time.
With 5.3 it is 31 and 37 ms, and that is consistent. An improvement, but
the question remains: where do the extra 6 ms come from? The impact will
be more pronounced when network latency is higher; in the tests I am
referencing here, there is only 40-45 microseconds of latency, since
everything runs on the same host.
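For reference, these are the connection-reuse knobs I am looking at in
user.properties (property names as I understand them from the 5.3
jmeter.properties; please correct me if these are not the right ones):

# Keep pooled connections alive for up to 60 s (the new 5.3 default)
httpclient4.time_to_live=60000
# Re-validate a pooled connection before reuse if it sat idle longer
# than this many milliseconds; the stale check costs a little time
httpclient4.validate_after_inactivity=1700
# If true (the default since 5.0, I believe), connections and SSL state
# are reset at the start of each thread group iteration
httpclient.reset_state_on_thread_group_iteration=true

I am only guessing that one of these accounts for the remaining
difference; I have not isolated it yet.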


Separately, I ran into a problem with JMeter 5.3 that made me go back to
5.2. I will start a different thread for it; for now, here is what I see
every time a load test ends with 5.3:

Tidying up ...    @ Wed Aug 19 12:49:45 IST 2020 (1597821585946)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[DestroyJavaVM,5,main], stackTrace:
Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#park at line:175
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await at line:2039
java.awt.EventQueue#getNextEvent at line:554
java.awt.EventDispatchThread#pumpOneEventForFilters at line:187
java.awt.EventDispatchThread#pumpEventsForFilter at line:116
java.awt.EventDispatchThread#pumpEventsForHierarchy at line:105
java.awt.EventDispatchThread#pumpEvents at line:101
java.awt.EventDispatchThread#pumpEvents at line:93
java.awt.EventDispatchThread#run at line:82

Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait
sun.awt.AWTAutoShutdown#run at line:314
java.lang.Thread#run at line:748

^C

This error has appeared on all three Linux VMs (Oracle Linux) I have
tried, as well as on my Windows system, which had a previous JMeter
installation. I did not encounter it on another Windows server on which
JMeter had never been installed before.
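Since the leftover threads are AWT ones, one thing I plan to try is
forcing headless mode for the non-GUI runs, along these lines (JVM_ARGS
is picked up by bin/jmeter, if I read the start script correctly; the
file names are placeholders):

JVM_ARGS="-Djava.awt.headless=true" ./jmeter -n -t testplan.jmx -l results.jtl

I do not know yet whether that avoids the hang; I will report back in
the other thread.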




On Tue, Aug 18, 2020 at 10:38 PM Felix Schumacher <
[email protected]> wrote:

>
> Am 18.08.20 um 18:18 schrieb Barcode S K:
> >  Hello,
> >
> > I recently recorded my application with JMeter.
>
> Which version of JMeter are you using?
>
> >
> >    - The application is SSL-enabled.
> >    - Application launch, login, and a screen launch are inside a
> >    Once-Only Controller.
> >    - The actual *transaction*, which involves just one POST request,
> >    is in a separate Transaction Controller.
> >    - Sleep time of 1 second is configured.
> >
> >
> > I have been running 1 VU tests, and I have observed that (for the
> > *transaction* alone):
> >
> >    1. While running with think time, average response time is about
> >    42 ms.
> >    2. While running without think time, average response time is 31 ms.
> >
> > To eliminate latency and excess load, I am running only 1 VU tests for
> > 30-60 seconds on the same machine that hosts the application. I have
> > observed (via netstat) that JMeter opens a new connection every 2-3
> > seconds, and I believe the overhead of new TCP handshakes plus the SSL
> > handshake (and other connection-related processing) is the reason for
> > this difference.
>
> We changed the default for httpclient4.time_to_live to 60000. If you are
> not using the newest version (which is 5.3; you can also try a nightly
> build), make sure that you have not overridden that setting locally.
>
> The change was tracked with
> https://bz.apache.org/bugzilla/show_bug.cgi?id=64289.
>
> Hope this helps
>
>  Felix
>
> >
> > I have not seen this behavior with other tools like NeoLoad and OATS. In
> > those tools, when an application that has sessions is run in a load test,
> > there is only one socket (connection) per virtual client and it remains
> > open throughout the run.
> >
> > Is there any way to ensure that JMeter does not open new sockets for
> > new requests in the middle of the run like this? Keep-alive is enabled
> > for HTTP requests.
> >
> > -SK
> >
>
