Daniel P. Berrangé <[email protected]> wrote:
>> >
>> > This has a slight semantic behavioural change.
>>
>> Yeap.
>>
>> See my answer to Peter. But three things came to mind:
>>
>> a - the size of the buffer is small (between 32KB and 256KB, depending
>>     on how you count it).  So we are going to call qemu_fflush() really
>>     soon.
>>
>> b - We are using this value to calculate how much we can send through
>>     the wire.  Here we are counting how much we have accepted to send,
>>     not how much has actually been sent (see the sketch below).
>>
>> c - When using multifd, the number of bytes that we send through the
>>     qemu_file channel is even smaller.  The multifd test in
>>     migration-test sends 300MB of data through the multifd channels
>>     and around 300KB on the qemu_file channel.
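
To make (b) concrete, here is a sketch of the two places the accounting
can live.  This is not the real qemu-file.c code; the struct and helper
names are made up for the example, and a real implementation would pick
one model, not both:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

#define SKETCH_MAX_IOV 64

typedef struct {
    int fd;                            /* destination of the stream */
    struct iovec iov[SKETCH_MAX_IOV];  /* accepted, not yet written */
    int iovcnt;
    uint64_t rate_limit_used;          /* bytes charged to the limit */
} SketchFile;

/* Model 1: charge the limit when a buffer is accepted into the iovec. */
static void sketch_put_buffer(SketchFile *f, void *buf, size_t size)
{
    assert(f->iovcnt < SKETCH_MAX_IOV);
    f->rate_limit_used += size;        /* charged before any I/O */
    f->iov[f->iovcnt].iov_base = buf;
    f->iov[f->iovcnt].iov_len = size;
    f->iovcnt++;
}

/* Model 2: charge the limit when the data is actually written. */
static void sketch_fflush(SketchFile *f)
{
    ssize_t sent = writev(f->fd, f->iov, f->iovcnt);
    if (sent > 0) {
        f->rate_limit_used += sent;    /* charged for what hit the wire */
    }
    f->iovcnt = 0;
}

Whichever model we pick, the two counters can only disagree by what
fits in f->iov between flushes, which is the bound I work out below.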
>>
>>
>> >
>> > By accounting for rate limit in the qemu_put functions, we ensure
>> > that we stop growing the iovec when rate limiting activates.
>> >
>> > If we only apply rate limit in the flush function, that will
>> > let the f->iov continue to accumulate buffers, while we have
>> > rate limited the actual transfer.
>>
>> 256KB maximum. Our accounting has bigger errors than that.
>>
>>
>> > This makes me uneasy - it feels like a bad idea to continue to
>> > accumulate buffers if we're not ready to send them
>>
>> I still think that the change is correct. But as you and Peter have
>> concerns about it, I will think a bit more about it.
>
> If Peter's calculations are correct, then I don't have any objection,
> as that's a small overhead.
#define IOV_MAX 1024
....
#define IO_BUF_SIZE 32768
#define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)

struct QEMUFile {
    ...
    uint8_t buf[IO_BUF_SIZE];          /* 32KB internal buffer */
    struct iovec iov[MAX_IOV_SIZE];    /* at most 64 pending entries */
    ....
};
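
IO_BUF_SIZE gives the 32KB number.  For the other bound:
MIN_CONST(IOV_MAX, 64) is 64 entries, and if each entry points at one
guest page (assuming 4KB target pages), that is 64 * 4KB = 256KB that
can sit in f->iov before we are forced to flush.  That is where the
"between 32KB and 256KB, depending on how you count it" in (a) comes
from.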
Later, Juan.