For my tape project I used "None" as a valid len argument to indicate reading the next available tape block, regardless of size (the underlying layer figured out the tape block size). This worked well together with existing file-type APIs.

So that would make the API:

read(2048) - reads up to 2048 bytes. If it reads less, it's at the end of the stream. Blocks until the requested amount has been read.
read()     - reads all data from the stream (bad idea), blocks until EOS.
read(None) - reads the next chunk of data, blocks if no data available. Returns 0 size if at EOS.
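The len convention above can be sketched as a small wrapper over an in-memory source of byte chunks. This is only an illustration of the proposed semantics; ChunkStream and its chunk iterator are invented here and are not part of mod_python or any real API:

```python
class ChunkStream:
    """Sketch of the read() semantics described above: read(n) blocks
    until exactly n bytes are available (or EOS), read() drains
    everything until EOS, read(None) returns the next chunk as-is."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)
        self._buf = b""

    def _fill(self):
        # Pull one more chunk into the buffer; return False at EOS.
        try:
            self._buf += next(self._chunks)
            return True
        except StopIteration:
            return False

    def read(self, size=-1):
        if size is None:
            # Next available block, regardless of size; b"" at EOS.
            if not self._buf and not self._fill():
                return b""
            out, self._buf = self._buf, b""
            return out
        if size < 0:
            # Everything until EOS (the "bad idea" case).
            while self._fill():
                pass
            out, self._buf = self._buf, b""
            return out
        # Exactly `size` bytes, unless EOS comes first.
        while len(self._buf) < size and self._fill():
            pass
        out, self._buf = self._buf[:size], self._buf[size:]
        return out
```

For example, with chunks [b"ab", b"cd"], read(3) returns b"abc" (spanning a chunk boundary), a following read(None) returns the leftover b"d", and the next read(None) returns b"" to signal EOS.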

Mike Looijmans
Philips Natlab / Topic Automation

M Willson (JIRA) wrote:
M Willson commented on MODPYTHON-222:

If not possible in full generality then I'd be happy with a special separate 
read method for getting what is in the input buffer so far, with 'read' just 
blocking until the chunked upload is complete. Would this be possible?

Or if we could specify a handler in python which is called back with each new 
chunk of data received, that would give the ultimate flexibility...
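The callback idea could look something like the following minimal sketch. Nothing like this exists in mod_python today; all names here (ChunkedInput, register_chunk_handler, feed) are invented for illustration:

```python
class ChunkedInput:
    """Hypothetical per-chunk callback API: the handler registers a
    callable that is invoked with each dechunked block as it arrives."""

    def __init__(self):
        self._callback = None

    def register_chunk_handler(self, callback):
        # The handler asks to be called back with each new chunk.
        self._callback = callback

    def feed(self, chunk):
        # Called by the transport layer for every dechunked block;
        # an empty chunk signals end of stream.
        if self._callback is not None:
            self._callback(chunk)


# The handler just collects chunks as they come in.
received = []
inp = ChunkedInput()
inp.register_chunk_handler(received.append)
for block in (b"first", b"second", b""):
    inp.feed(block)
```

This inverts control compared to read(): the handler never blocks at all, it simply reacts to data as the server delivers it.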

Support for chunked transfer encoding on request content.

                Key: MODPYTHON-222
            Project: mod_python
         Issue Type: New Feature
         Components: core
   Affects Versions: 3.3.1
           Reporter: Graham Dumpleton

It is currently not possible to use chunked transfer encoding on request 
content delivered to a mod_python request handler.
The use of chunked transfer encoding is explicitly blocked in C code by:
        rc = ap_setup_client_block(self->request_rec, REQUEST_CHUNKED_ERROR);
To allow chunked transfer encoding, REQUEST_CHUNKED_DECHUNK would need to be 
supplied instead of REQUEST_CHUNKED_ERROR.
Problem is that it isn't that simple.
First off, the problems associated with MODPYTHON-212 have to be fixed with 
code being able to cope with there being no content length.
The next issue is that the read() method is currently documented as behaving as:
  "If the len argument is negative or omitted, reads all data given by the 
client."
This means that read() with no arguments can't mean "give me everything 
that is currently available in the input buffers", as everyone currently expects it 
to return everything sent by the client. Thus, to be able to process streaming data 
one would have to supply the amount of data one wants to read. The code for 
that, though, will always try to ensure that that exact amount of data is read 
and will block if there isn't enough and it isn't the end of input. A handler, though, 
may not want it to block and would be happy with just getting what has been read, 
only expecting it to block if nothing is currently available.
In other words, the current specification for how read() behaves is 
incompatible with what would be required to support chunked transfer encoding 
on request content.
Not sure how this conflict can be resolved.
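To make the conflict concrete, here is a sketch of a streaming handler under the proposed read(None) semantics. FakeRequest is an invented in-memory stand-in for the real request object, not mod_python API:

```python
class FakeRequest:
    """Invented stand-in that models both behaviours discussed above."""

    def __init__(self, chunks):
        self._chunks = list(chunks)

    def read(self, size=-1):
        if size is None:
            # Proposed semantics: next available chunk, b"" at EOS.
            return self._chunks.pop(0) if self._chunks else b""
        # Current documented contract (negative/omitted len): return
        # everything the client sent -- which means blocking until the
        # whole upload arrives, unusable for streaming chunked input.
        data = b"".join(self._chunks)
        self._chunks = []
        return data


def streaming_handler(req):
    # Process the upload chunk by chunk as it arrives, blocking only
    # when no data is available yet.
    parts = []
    while True:
        chunk = req.read(None)
        if not chunk:        # zero size means end of stream
            break
        parts.append(chunk)
    return b"".join(parts)
```

The point of the sketch is that both behaviours can coexist if the len argument distinguishes them, which is essentially the convention described at the top of the thread.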
