Hello, I am now using curl 8.6.0, but I'm still seeing this problem. I have some more information to add, though:
- I tried out OpenSSL 1.1, and even with the CA store caching enabled I still saw the same high latencies. I specifically observed two kinds of high-latency calls: curl_multi_socket_action with the CURL_SOCKET_TIMEOUT argument takes 900 ms, and some of my requests record a very high name lookup time of 400 to 600 ms, even though I'm providing curl an IP address as the URL (in IPv4 format; I also configured curl to perform IPv4 resolution only).

- I enabled curl's verbose logs for my service and added timestamps to the messages, similar to what the --trace-time option does for the CLI: <https://github.com/curl/curl/blob/67b0692f093c597b1537d9d64ff75b17a751644e/src/tool_cb_dbg.c#L37>. I observed that during those 900 ms spent in curl_multi_socket_action with CURL_SOCKET_TIMEOUT, as well as during the high name resolution times, the verbose logs are filled with messages like "Closing connection". I then found this call to "prune_dead_connections" in curl's "create_conn" function: <https://github.com/curl/curl/blob/master/lib/url.c#L3577>.

A couple of questions on this:

- Is it possible that the time spent pruning dead connections is getting counted towards DNS resolution? Since that's the very first latency curl tracks for a handle, my guess is that anything curl does before actually reaching the resolution part of the request gets counted within the "name lookup" time.

- The "prune_dead_connections" function is expected to be called at most once per second, and I do see that happening with my service as well. But given the high traffic I'm submitting to curl (~1000 or more HTTPS requests per second, all going to different URLs/IP addresses), we end up accumulating a lot of dead connections over time, and pruning ends up being a cost we have to pay before we can get to a request almost every second. Is it possible to somehow proactively prune dead connections?
Or can we prune in smaller batches, so pruning doesn't add too much time to an actual request? I also noticed that closing an SSL connection is expensive, because curl makes some additional calls into the TLS library before it actually closes the connection, so serially closing 100+ connections at a time adds up to quite a bit of time. I have set a connection pool size of 2000 connections, and I'm using the default connection age of 118 seconds.

Thanks,
Richa

On Sat, Dec 2, 2023 at 8:48 AM Daniel Stenberg via curl-library <curl-library@lists.haxx.se> wrote:

> On Fri, 1 Dec 2023, Ray Satiro via curl-library wrote:
>
> > Old versions. Try with the latest curl [1] and openssl [2].
>
> Be aware that OpenSSL 3 has been mentioned to do worse performance-wise than
> its previous versions. I don't know if the recently released version 3.2 fixes
> this, but it could be worth checking that out if someone really wants to cram
> out the most of their CPUs.
>
> --
>
>  / daniel.haxx.se
>  | Commercial curl support up to 24x7 is available!
>  | Private help, bug fixes, support, ports, new features
>  | https://curl.se/support.html
> --
> Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
> Etiquette: https://curl.se/mail/etiquette.html