In real-time applications it would be ideal if we could make sure transfers go smoothly, processing proportionate chunks of data.

So I tried using CURLOPT_MAX_RECV_SPEED_LARGE to limit data transfers to 30 KB/s, using a 1 KB buffer and calling curl_multi_perform at 30 Hz.

    curl_easy_setopt(curl_easy_handle, CURLOPT_BUFFERSIZE, 1024);
    curl_easy_setopt(curl_easy_handle, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t) 30720);
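
For context, the driving loop looks roughly like this (a minimal sketch; the sleep-based scheduler and the run_at_30hz name are just stand-ins for my actual timing code):

    #include <curl/curl.h>
    #include <chrono>
    #include <thread>

    /* pump the multi handle at ~30 Hz until all transfers are done */
    void run_at_30hz(CURLM *multi_handle)
    {
      int running_handles = 1;
      const std::chrono::milliseconds frame(1000 / 30); /* ~33 ms per frame */

      while(running_handles) {
        curl_multi_perform(multi_handle, &running_handles);
        std::this_thread::sleep_for(frame); /* wait for the next frame */
      }
    }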

I also have a write callback function, and I pass a pointer to the implementing class instance as the user data for the callback.

    curl_easy_setopt(curl_easy_handle, CURLOPT_WRITEFUNCTION, curl_write_function);
    curl_easy_setopt(curl_easy_handle, CURLOPT_WRITEDATA, this);
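
The callback itself just forwards the data to the instance; roughly like this (the Receiver class and on_data member are illustrative names, not my actual code):

    #include <curl/curl.h>
    #include <cstddef>

    class Receiver {
    public:
      /* consume one chunk of payload; returns the number of bytes handled */
      size_t on_data(const char *data, size_t bytes)
      {
        (void)data; /* real code would parse/store the chunk here */
        return bytes;
      }
    };

    /* trampoline: curl calls this and we forward to the instance that was
       passed as CURLOPT_WRITEDATA */
    static size_t curl_write_function(char *ptr, size_t size, size_t nmemb,
                                      void *userdata)
    {
      Receiver *self = static_cast<Receiver *>(userdata);
      return self->on_data(ptr, size * nmemb);
    }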

I was expecting the write callback to be called more or less once per curl_multi_perform, with a payload of 1 KB per frame. What I see instead is a burst of 1 KB callbacks until the 30 KB for that second have arrived, then nothing until the next second, when it resumes.

Looking into it, I noticed that CURLOPT_MAX_RECV_SPEED_LARGE only restricts calls to Curl_readwrite when max_recv_speed is set and the current download speed exceeds it, by switching between the CURLM_STATE_PERFORM and CURLM_STATE_TOOFAST states in the multi_runsingle() function in lib/multi.c.

If the transfer is within the up/download speed limits, it calls Curl_readwrite (implemented in lib/transfer.c).

Curl_readwrite then does a select() to see whether there is data ready to send or receive.

The sending part is fine: it calls readwrite_upload, and that function only sends one buffer's worth via Curl_write because it uses a do {} while(0) loop.

The receiving part is where I see an obstacle to smooth real-time transfers: it calls readwrite_data (also implemented in lib/transfer.c).

That function is implemented with a do {} while(data_pending(conn)) loop, meaning it will call Curl_read() as many times as needed until there is no more data. This breaks any speed limit set by CURLOPT_MAX_RECV_SPEED_LARGE: the callback set with CURLOPT_WRITEFUNCTION is called many times within this loop until no more data is available.

I'm suggesting adding a Curl_pgrsUpdate() call to update the download speed, then checking whether we are over the limit and, if so, breaking out of the loop.

With that change, the end of readwrite_data()'s do loop would look like this:

    if(is_empty_data) {
      /* if we received nothing, the server closed the connection and we
         are done */
      k->keepon &= ~KEEP_RECV;
    }
    /*
     * Suggested fix
     */
    /* update connection progress so we can check if over recv speed */
    Curl_pgrsUpdate(conn);
    /* check if over recv speed */
    if((data->set.max_recv_speed > 0) &&
       (data->progress.dlspeed > data->set.max_recv_speed)) {
        break; /* get out of loop */
    }
    /*
     * End Suggested fix
     */
  } while(data_pending(conn));


This only breaks the loop when max_recv_speed was set via CURLOPT_MAX_RECV_SPEED_LARGE and the current download speed exceeds it. At worst the loop reads one extra buffer before the check triggers, so existing behavior is otherwise unchanged.

This change will have the following benefits:

1) It will maintain the speed limit even from the first data frame.
2) The CURLOPT_WRITEFUNCTION callback will be called more evenly (instead of a burst at a time when CURLOPT_MAX_RECV_SPEED_LARGE is used).
3) It will call the progress callback more often, giving more granularity on the data being processed, whether or not CURLOPT_MAX_RECV_SPEED_LARGE is set (see the sketch after this list).
4) It is real-time friendly for devices and applications that need tight control over bandwidth and resources.
5) It will handle very low speeds too, balancing processing and transfer time evenly.
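
For instance, a simple progress callback like the following (progress_function is a hypothetical name; curl_easy_handle is the same handle as above) would then fire at a much finer grain:

    #include <curl/curl.h>
    #include <cstdio>

    /* report download progress; returning non-zero would abort the transfer */
    static int progress_function(void *clientp, double dltotal, double dlnow,
                                 double ultotal, double ulnow)
    {
      (void)clientp; (void)dltotal; (void)ultotal; (void)ulnow;
      std::printf("downloaded so far: %.0f bytes\n", dlnow);
      return 0;
    }

    /* enable the callback on the easy handle */
    curl_easy_setopt(curl_easy_handle, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(curl_easy_handle, CURLOPT_PROGRESSFUNCTION, progress_function);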

Downside: it will call the progress update many more times per curl_multi_perform, depending on the read buffer size (I'm not sure that is really a downside, but the extra function calls could introduce overhead depending on what the progress callback function does).

Thanks,

Miguel
