> I hear that you have a very specific application scenario where the curl 
> 8.6.0 "almost" (your words) helped you, but the versions after that are no 
> longer that "almost". Especially for cases where the overall network transfer 
> is less than a second (or just a few).

I don't think I have a very specific scenario. My scenario is a use case where 
multiple back-to-back downloads are performed (not just one), and their 
download speeds are measured to make application-critical decisions.
One example is a video streaming application that downloads media segments and 
measures their download speeds to select the streaming quality.
For such applications, the accuracy and stability of the speed measurement for 
each download is very important: it is what lets the player pick the best 
video quality for the current network conditions and avoid underruns and 
rebuffering.
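To make the stakes concrete, here is a minimal sketch of the kind of bitrate 
selection such an application performs. The 4-step bitrate ladder and the 80% 
safety margin are hypothetical, made up for illustration; real players differ:

```c
/* Pick the highest rendition whose bitrate fits under the measured
 * throughput with a safety margin. The ladder values (bits/sec) and
 * the 0.8 margin are made up for illustration. */
static long pick_bitrate(long measured_bps)
{
  static const long ladder[] =
    { 20000000L, 10000000L, 5000000L, 2000000L };
  const double margin = 0.8; /* only trust 80% of the measured speed */
  for(int i = 0; i < 4; i++)
    if((double)ladder[i] <= margin * (double)measured_bps)
      return ladder[i];
  return ladder[3]; /* nothing fits; fall back to the lowest rendition */
}
```

With a correctly enforced 7 Mbps limit this selector would pick the 5 Mbps 
rendition; with an inflated 67 Mbps measurement it picks the 20 Mbps one, 
which the actual network cap cannot sustain.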

If the rate limit is not reliable as far as the measured transfer speed is 
concerned, and a rate limit of, say, 7 Mbps lets a media segment download 
finish with a much higher measured speed (e.g. 20 Mbps), then the streaming 
application may select a higher bitrate (e.g. a 20 Mbps rendition) with much 
larger media segment sizes.
The video may then stall when the applied rate limit (7 Mbps) does take effect 
on the next, larger downloads, causing problems for users.

> This is in accordance with what you want, except at the end of a transfer. 
> Here, our definitions of what "rate limit" means differ. (And I would really 
> prefer you not calling our view on things as "not working" or a "regression". 
> It is working fine as we define rate limits. It is just not what you want. 
> Respect each other's views.)

I totally respect the opposing view and fully understand it.
But I would like to point out that there are libcurl users who consider the 
"rate limit" concept as applying to the whole transfer, and who expect the 
measured transfer speed to be close to the specified rate limit regardless of 
transfer sizes and network conditions.

And that's what we had in the previous libcurl versions.
8.6.0 worked very well for rate limiting, as it provided stable and 
predictable measured transfer download speeds for any transfer size and 
network condition.
8.17.0 was not as precise as 8.6.0, but it did not have cases, as 8.18.0 does, 
where the measured transfer speeds are not even close to the rate limit.

Let's think for a second about the future, when sometime after 8.18.0 is 
released libcurl users will see results like these:

Network bandwidth: 70 Mbps

download size: 1 MB, rate limit: 20 Mbps
time=124 ms, dnld=1048576 B, speed=67141091 bps, spd_diff=47141091 bps, pct=235.7 %
(the measured transfer speed is more than 2x the rate limit)

download size: 2 MB, rate limit: 20 Mbps
time=575 ms, dnld=2097152 B, speed=29169599 bps, spd_diff=9169599 bps, pct=45.8 %
(the measured transfer speed is ~46% higher than the rate limit)

download size: 2 MB, rate limit: 16 Mbps
time=1018 ms, dnld=2097152 B, speed=16480047 bps, spd_diff=480047 bps, pct=3.0 %
(very good precision)

So, they will see different deviations (sometimes very big) of the measured 
transfer speeds from the rate limits, depending on download size, applied rate 
limit and network speed.
This makes the correlation between the rate limit and the measured transfer 
speed totally unpredictable, so libcurl users will not be able to have the 
same expectations as they had before.
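For clarity, the speed and pct figures in the results above are plain 
arithmetic over the whole transfer; restated as code:

```c
/* the quantities shown in my test output: average speed over the
 * whole transfer (bits/sec), and the deviation of that speed from
 * the configured rate limit, in percent */
static double avg_speed_bps(double bytes, double elapsed_ms)
{
  return bytes * 8.0 * 1000.0 / elapsed_ms;
}

static double pct_over(double measured_bps, double limit_bps)
{
  return 100.0 * (measured_bps - limit_bps) / limit_bps;
}
```

For example, the first row: 47141091 over 20000000 is 235.7 %, i.e. the whole 
transfer averaged more than 3x the configured limit.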

That's why it will be a "regression" from that perspective: 8.18.0 takes away 
a feature (the predictable correlation between rate limits and measured 
transfer speeds) that was present before.
And we will have a hard time explaining that "it is a feature, not a bug", 
because how are libcurl users supposed to believe that the rate limit is 
actually working when they see a 2x difference between the rate limit and the 
measured transfer speed?
Should they just take our word that it really works as intended, and that they 
were unlucky to hit a bad combination of rate limit, size and network speed? 
But what if it was a real bug in the rate limiting mechanism?

> The current rate limit implementation is what should be natural for many 
> libcurl applications. For longer transfers, it works for you as well. 
> For short ones, unfortunately, it does not give you the precision that you 
> want in your application. Acknowledged. 
> Adding a separate rate limit implementation in libcurl, so your application 
> does not have to, is not a convincing argument.

As can be seen in my test examples, my transfers are not that short.
I used 1 MB, 2 MB and 5 MB sizes in my tests (which are close to the sizes 
video streaming apps use for certain bitrates), and I can tell that the 
precision (the correlation between rate limit and measured speed) varies 
significantly depending on rate limit, transfer size and network speed.

It gets a bit better as the transfer size increases, but the transfer speed 
may still deviate significantly from the rate limit even for sizes like 5 MB, 
which is hardly a short transfer.

I understand that it may seem that I am the only person who needs this, but I 
guess this is because this "regression" appeared only in 8.18.0, and the 
libcurl users who might be affected by this new way of applying the rate limit 
are probably still using the older versions, where the rate limit behavior was 
more predictable.

And I am trying to convince you guys not just for my own sake, but rather for 
the libcurl users whose use cases involve back-to-back transfers and rely on 
measured transfer speeds to make critical application decisions.
For this group of users, the natural way to apply a rate limit is to apply it 
to the whole transfer, providing a reliable correlation between the rate limit 
and the measured transfer speed.
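Mechanically, "apply it to the whole transfer" means very little extra work: 
if the data arrived faster than the limit allows, hold the completion for the 
remainder. A sketch of that residual delay, as I am proposing it (this is not 
existing libcurl behavior; units are bytes, bytes/sec and milliseconds):

```c
/* extra wait after a finished transfer so that its overall average
 * speed does not exceed the rate limit: the ideal duration is
 * bytes/limit; pad with the difference, if any */
static long residual_delay_ms(long bytes, long limit_Bps, long elapsed_ms)
{
  long ideal_ms = (long)(1000.0 * (double)bytes / (double)limit_Bps);
  return ideal_ms > elapsed_ms ? ideal_ms - elapsed_ms : 0;
}
```

For the 1 MB / 20 Mbps case above (124 ms elapsed vs. a 419 ms ideal 
duration), that is a ~295 ms pause, after which the measured average lands at 
the limit.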

tl;dr
- The new rate limit implementation creates regressions for users/apps that 
relied on the predictable correlation between the rate limit and the measured 
transfer speed in previous releases.
- The unpredictable transfer speeds, when a rate limit is applied, make it 
difficult to distinguish real rate limiting bugs from just "unlucky" run-time 
conditions.
- There is a group of applications (e.g. video streaming) that rely on the 
predictability and stability of measured transfer speeds under a rate limit to 
make critical decisions.
- There are cases where multiple back-to-back transfers on fast networks 
create unexpected load on the backend server, because the rate limiting does 
not add pauses between transfers.

Thanks,
Dmitry


-----Original Message-----
From: Stefan Eissing <[email protected]> 
Sent: Tuesday, January 6, 2026 4:00 AM
To: libcurl development <[email protected]>
Cc: Dmitry Karpov <[email protected]>
Subject: [EXTERNAL] Re: Rate limit regressions in libcurl 8.18.0 vs 8.17.0

I hear that you have a very specific application scenario where the curl 8.6.0 
"almost" (your words) helped you, but the versions after that are no longer 
that "almost". Especially for cases where the overall network transfer is less 
than a second (or just a few).

The curl rate limit works in "bytes per second". It does not say when *during* 
the second the bytes are received. The bytes may be received in the first few 
nanoseconds. If the transfer has not ended by that, libcurl will wait the 
remainder of the second before doing the next receive, obeying the limit.

This is in accordance with what you want, except at the end of a transfer. Here, 
our definitions of what "rate limit" means differ. (And I would really prefer 
you not calling our view on things as "not working" or a "regression". It is 
working fine as we define rate limits. It is just not what you want. Respect 
each other's views.)

An example: a transfer of 100 bytes is configured with a rate limit of 1000 
bytes/s.
- In your definition, the transfer should complete exactly after 100ms.
- In our definition, the transfer completes when the 100 bytes have been 
received, never delivering more than 1000 bytes in a second. And there was no 
second that it delivered more.

For longer transfers, this all does not matter much as the last second has 
decreasing impact.

Now, besides your libcurl application, there are other applications that use 
rate limits but want to have transfers reported as complete when all data has 
arrived. I would hate my steam downloads to pause at the end, for example. You 
have a specific application case and your proposed solution is incompatible 
with other cases.

I do not know your application and cannot judge how to best solve that. It 
seems to be in need to some "holding queue" where finished libcurl transfers 
are held to idle the last second remainder before the application acts on the 
completion. This is what you are asking libcurl to implement, so your 
application does not have to.

tl;dr

The current rate limit implementation is what should be natural for many 
libcurl applications. For longer transfers, it works for you as well. For 
short ones, unfortunately, it does not give you the precision that you want in 
your application. Acknowledged. Adding a separate rate limit implementation in 
libcurl, so your application does not have to, is not a convincing argument.

- Stefan

> Am 05.01.2026 um 21:29 schrieb Dmitry Karpov via curl-library 
> <[email protected]>:
> 
>> I don't think libcurl should do that. Once the transfer is complete, I think 
>> it should return/say so.
> 
> But that creates a big problem for back-to-back transfers like it was 
> observed in my examples, for which it looked like there was no rate limit 
> applied at all.
> And it creates a load problem on the servers which don't see the rate 
> limiting and must serve data with higher load.
> 
> As far as I know, CPU throttling mechanisms add delays after some 
> operation is completed to prevent scheduling of next operations even though 
> it may not be needed for the already completed operation.
> 
> And here we have a kind of the same case - we need to add some delay 
> after the transfer to complete to prevent performing the next transfer too 
> soon because we don't have an "ideal" rate limiting mechanism and need to 
> compensate for that after the transfer is done.
> 
> In 8.6.0, we had almost "ideal" rate limiting mechanism where the speed 
> measurements were done more frequently for the price of higher CPU 
> utilization.
> But this allowed to perform transfers with rate limiting applied more 
> smoothly and with very high precision.
> 
> So, if we decreased the number of speed measurements to reduce CPU usage, 
> then we would need to compensate the loss of precision by adding some delays 
> at the end.
> 
> Otherwise, we can have run-time conditions (like in my test cases) 
> where in some multi-transfer use cases the rate limiting is not actually 
> working, and it will be a regression for clients expecting it to work (and it 
> used to work in 8.17, although not with the same precision as in earlier 
> releases like in 8.6.0).
> 
>> If your app thinks it needs to add that extra wait, it is really easy for 
>> you to add a sleep there though.
> 
> Unfortunately, it is not easy. 
> My app is a very large multi-layered framework, where libcurl 
> transfers are used in too many components, including modules closed for 
> modifications, to make it feasible to add additional code which adds delays 
> after each transfer if the rate limit is not observed by the libcurl rate 
> limiting option.
> And I think I am not alone who has applications like mine.
> 
> If client code needs to do that kind of actions, then it kind of defeats the 
> purpose of the libcurl rate limit option making it unreliable and 
> unpredictable.
> And because it will be a regression from the earlier releases, not 
> sure that many folks will be happy about it and the perspective to add 
> additional code to work around the new problem.
> 
> Thanks!
> Dmitry
> 
> 
> -----Original Message-----
> From: Daniel Stenberg <[email protected]>
> Sent: Saturday, January 3, 2026 2:36 PM
> To: Dmitry Karpov via curl-library <[email protected]>
> Cc: Dmitry Karpov <[email protected]>
> Subject: [EXTERNAL] Re: Rate limit regressions in libcurl 8.18.0 vs 
> 8.17.0
> 
> On Wed, 31 Dec 2025, Dmitry Karpov via curl-library wrote:
> 
>> As I discussed it with Stefan before, the rate limit mechanism should 
>> apply some small delay at the end of a throttled transfer to maintain 
>> the specified rate limit for the transfer.
> 
> I don't think libcurl should do that. Once the transfer is complete, I think 
> it should return/say so.
> 
>> And even though it will not save the bandwidth as all the data has 
>> been already transferred, it will provide proper network speed 
>> measurements for the decision making logic (i.e. bitrate selection 
>> mechanism in video streaming apps) and will help to decrease server 
>> load in back-to-back multi-transfer scenarios.
> 
> If your app thinks it needs to add that extra wait, it is really easy for you 
> to add a sleep there though.
> 
> --
> 
>  / daniel.haxx.se || https://rock-solid.curl.dev
> --
> Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
> Etiquette:   https://curl.se/mail/etiquette.html

