Hello Subra.

Maybe you can read the BLOB a piece at a time with a function like MySQL's SUBSTRING().

I've never tested that. When there are large files to store, I prefer to store them as normal files in a directory and record each file's path in the database (e.g. in a VARCHAR field).
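
Something like this is what I have in mind (untested, using the MySQL C API; the files(id INT, data BLOB) table and the 64 KB piece size are made up):

    /* Untested: fetch a BLOB in pieces with SUBSTRING(), which is 1-based.
       Assumes a made-up table files(id INT, data BLOB). */
    #include <stdio.h>
    #include <mysql/mysql.h>

    static int read_blob_in_pieces(MYSQL *conn, int id)
    {
        const unsigned long piece = 64 * 1024;   /* 64 KB per query */
        unsigned long offset = 1;                /* SUBSTRING() starts at 1 */

        for (;;) {
            char query[128];
            snprintf(query, sizeof(query),
                     "SELECT SUBSTRING(data, %lu, %lu) FROM files WHERE id = %d",
                     offset, piece, id);
            if (mysql_query(conn, query) != 0)
                return -1;

            MYSQL_RES *res = mysql_store_result(conn);
            if (res == NULL)
                return -1;

            MYSQL_ROW row = mysql_fetch_row(res);
            unsigned long *lengths = mysql_fetch_lengths(res);

            if (row == NULL || lengths[0] == 0) {   /* past the end */
                mysql_free_result(res);
                break;
            }

            /* hand row[0] (lengths[0] bytes) to whatever sends the data */

            offset += lengths[0];
            mysql_free_result(res);
        }
        return 0;
    }

Only one piece sits in memory at a time that way, at the cost of one query per piece.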

Samuel



Subra A Narayanan wrote:
Hello folks,

I have an Apache module which receives requests for data from clients, reads
binary data from a datastore, loads the data into memory, and then sends out
the response with the data in the response body. Here is the code snippet:

        ap_set_content_type(request, "application/octet-stream");

        //Open db
        //Read BLOB data into objBuffer
        //bufferLen = length(objBuffer)

        //Content-Length must be set before the first write,
        //since the headers go out with it
        ap_set_content_length(request, bufferLen);
        ap_rwrite(objBuffer, bufferLen, request);

        return HTTP_OK;


The above code works as desired. But as you can see, this approach reads the
entire file into memory and then writes it out to the response body. This
method works fine for small amounts of data, but for larger data blocks (say
a 1 GB file) the server will run out of memory pretty quickly.

I was wondering if there is a way to read and transmit chunks of data to the
client. The module would read, let's say, 10k of data at a time, transmit it,
read the next chunk, transmit it, and so on until the end is reached.
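
Roughly, the loop I have in mind looks like the sketch below; read_next_chunk()
is a made-up helper (taking some datastore handle db) that fills a buffer from
the datastore and returns the number of bytes read, 0 at the end:

        ap_set_content_type(request, "application/octet-stream");
        //Total size unknown up front, so skip ap_set_content_length();
        //Apache then falls back to chunked transfer encoding for HTTP/1.1

        char buf[10 * 1024];   //10k at a time
        int n;

        while ((n = read_next_chunk(db, buf, sizeof(buf))) > 0) {
            if (ap_rwrite(buf, n, request) < 0)
                return HTTP_INTERNAL_SERVER_ERROR;
            ap_rflush(request);   //push this chunk out to the client
        }

        return OK;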

Any help would be greatly appreciated!!

Thanks,
Subra
