By default, Qt sockets buffer an unbounded amount of memory if the application 
is not reading all data from the socket while the event loop is running.
In the worst case, this leads to a memory allocation failure, crashing the 
application or terminating it with a std::bad_alloc exception.

We could change the default read buffer size from 0 (unbounded) to a sensible 
default value, e.g. 64 KiB (to be benchmarked).
Applications wanting the old behaviour could explicitly call 
setReadBufferSize(0).
Applications that already set a read buffer size would be unaffected.
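The proposal amounts to the following (a minimal sketch; the 64 KiB value is 
the to-be-benchmarked placeholder from above, not a settled number):

```cpp
#include <QTcpSocket>

QTcpSocket socket;

// Today the default read buffer size is 0 (unbounded). The proposal would
// make the default a bounded value, e.g.:
socket.setReadBufferSize(64 * 1024);

// An application that wants the current unbounded behaviour would opt
// back in explicitly:
socket.setReadBufferSize(0);
```

Once the buffer is bounded, the socket stops reading from the OS when the 
buffer is full, so the kernel's TCP flow control pushes back on the sender.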

The same change would be required at the QNetworkAccessManager level, as there 
is no point applying flow control at the socket while still allowing unbounded 
memory allocations in the QNetworkReply.

This is a behavioural change that would certainly cause regressions.
For example, any application that waits for the QNetworkReply::finished() 
signal and then calls readAll() would break if the object being downloaded is 
larger than the buffer size.
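The regression pattern in question looks like this (sketch; the URL is 
illustrative):

```cpp
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

QNetworkAccessManager manager;
QNetworkReply *reply =
    manager.get(QNetworkRequest(QUrl("http://example.com/large-file")));

// Widespread pattern today: let the whole download accumulate in the
// reply, then read it in one go when the transfer finishes.
QObject::connect(reply, &QNetworkReply::finished, [reply]() {
    QByteArray data = reply->readAll(); // expects the full object in memory
    reply->deleteLater();
});
```

With a bounded default read buffer, a download larger than the buffer stalls 
once the buffer fills (flow control kicks in and nothing drains the buffer), 
so finished() is never emitted and the code above deadlocks.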

On the other hand, we can't enable outbound flow control in sockets by default 
(we don't want write() to block).
Applications need to use the bytesWritten() signal instead.
QNetworkAccessManager already implements outbound flow control for uploads.
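Outbound flow control has to be cooperative: rather than blocking in write(), 
the application feeds the next chunk from the bytesWritten() signal. A minimal 
sketch, assuming an already-connected socket and an already-open file (the 
helper and chunk size are illustrative, not Qt API):

```cpp
#include <QFile>
#include <QTcpSocket>

// Sketch: upload a large file in bounded chunks, driven by bytesWritten(),
// so the socket's write buffer never grows without bound.
void startUpload(QTcpSocket *socket, QFile *file)
{
    const qint64 chunkSize = 64 * 1024; // illustrative chunk size

    auto sendChunk = [socket, file, chunkSize]() {
        if (socket->bytesToWrite() > 0 || file->atEnd())
            return; // wait until the previous chunk has drained
        socket->write(file->read(chunkSize));
    };

    // Each time a chunk has been handed off to the OS, queue the next one.
    QObject::connect(socket, &QTcpSocket::bytesWritten,
                     socket, [sendChunk](qint64) { sendChunk(); });
    sendChunk(); // prime the pipeline with the first chunk
}
```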

Is making applications safe against this kind of overflow by default worth the 
compatibility breaks?



_______________________________________________
Development mailing list
Development@qt-project.org
http://lists.qt-project.org/mailman/listinfo/development