Here's one that passes all the tests, and is 2x as fast as the 'current'
and 'new' implementations on random binary data. I haven't been able to
generate data where the 'mike' version is slower:
def read_to_boundary(self, req, boundary, file, readBlockSize=65536):
    prevline = ""
    last_bound = boundary + '--'
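The archive truncates the function here. A minimal, self-contained sketch of the same readline-based idea, holding back each line's terminator until the next line is known not to be a boundary, might look like the following (the plain file-like `stream`/`out` arguments and exact details are my assumptions, not Mike's actual code):

```python
import io

def read_to_boundary(stream, boundary, out):
    # Sketch of the readline-based boundary scan discussed in this
    # thread; `stream` and `out` are file-like objects (an assumption).
    last_bound = boundary + '--'   # the final boundary carries a trailing '--'
    delim = ''                     # terminator withheld from the previous line
    while True:
        line = stream.readline()
        if not line:
            break                  # premature end of input
        # Split the payload from its line terminator.
        if line.endswith('\r\n'):
            payload, newline = line[:-2], '\r\n'
        elif line.endswith('\n'):
            payload, newline = line[:-1], '\n'
        else:
            payload, newline = line, ''
        if payload == boundary or payload == last_bound:
            break                  # the withheld newline belonged to the boundary
        out.write(delim + payload)
        delim = newline            # hold back the terminator until next line

data = 'hello\r\nworld\r\n--XYZ--\r\n'
buf = io.StringIO()
read_to_boundary(io.StringIO(data), '--XYZ', buf)
# buf.getvalue() is 'hello\r\nworld': the CRLF before the boundary
# was consumed as part of the delimiter, as MIME requires.
```

The key point is that the CRLF preceding a boundary belongs to the boundary, not the body, which is why the terminator is written one line late.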
Thanks for that improvement, though I don't like its complexity. I'm
testing "mike's" version with my set of files and will let you all know
how it goes.
BTW, the line that reads "last_bound = boundary + '--'" computes the
closing marker once up front, so we save 4 CPU cycles there :)
Alexis Marrero wrote:
The next test that I will run this against will be with an obscene
amount of data for which this improvement helps a lot!
The slow part is the checking for boundaries.
I'm using HTTP "chunked" encoding to access a raw TAPE device through
HTTP with Python (it GETs or POSTs the data).
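The tape-device code itself isn't shown in the thread, but the chunked framing it relies on is simple: each chunk is a hex length line, the data, and a CRLF, ended by a zero-size chunk. A toy encoder/decoder pair (my own illustration, unrelated to the poster's setup):

```python
import io

def chunk_encode(blocks):
    # Frame each block as <hex length>\r\n<data>\r\n, ending with a zero chunk.
    out = b''
    for block in blocks:
        out += b'%x\r\n' % len(block) + block + b'\r\n'
    return out + b'0\r\n\r\n'

def chunk_decode(data):
    # Unframe a chunked body back into the raw byte stream.
    stream = io.BytesIO(data)
    body = b''
    while True:
        size = int(stream.readline().strip(), 16)
        if size == 0:
            break              # zero-size chunk terminates the body
        body += stream.read(size)
        stream.read(2)         # discard the CRLF that follows each chunk
    return body

round_trip = chunk_decode(chunk_encode([b'hello', b' world']))
# round_trip == b'hello world'
```

Because each chunk announces its own length, the receiver never has to scan the payload for a terminator, which is exactly why it suits streaming arbitrary binary data from a device.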
Mike Looijmans wrote:
I've attached a modified upload_test_harness.py that includes the 'new'
and 'current' versions, plus the 'org' version (as in the 3.1 release)
and the 'mike' version.
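The harness itself isn't reproduced in this archive. A simple timing scaffold in the same spirit, comparing read_to_boundary variants on a generated body, could look like this (the `naive_reader` baseline and all names here are my invention, not the attached harness):

```python
import io
import timeit

def naive_reader(stream, boundary, out):
    # Simplest baseline: compare every stripped line to both markers.
    last_bound = boundary + '--'
    for line in stream:
        s = line.rstrip('\r\n')
        if s == boundary or s == last_bound:
            return
        out.write(line)

def time_reader(reader, payload, boundary='--XYZ'):
    # Time `reader` on a multipart-style body built from `payload`,
    # returning the best of a few repeats to reduce timer noise.
    body = payload + '\r\n' + boundary + '--\r\n'
    def run():
        reader(io.StringIO(body), boundary, io.StringIO())
    return min(timeit.repeat(run, number=20, repeat=3))
```

Swapping different reader functions into `time_reader` is enough to reproduce the kind of "2x as fast on random binary data" comparison quoted above, at least for in-memory streams.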
Nice changes, Mike.
I started to get confused by the names of the various read_to_boundary_*
functions, so I've made a s
Inspired by Mike's changes I made some changes to the "new" version to
improve performance while keeping readability:
def read_to_boundary_new(self, req, boundary, file, readBlockSize):
    previous_delimiter = ''
    bound_length = len(boundary)
    while 1:
        line = req.readline(readBlockSize)
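The rest of the function is lost in the archive. One speculative reconstruction (mine, not Alexis's actual code, and with a simplified signature) of how the precomputed `bound_length` could pay off is to use a cheap length check before any string comparison:

```python
import io

def read_to_boundary_new(stream, boundary, out):
    # Speculative reconstruction of the truncated function above:
    # bound_length is computed once so the per-line boundary test
    # starts with an integer comparison, not a string compare.
    previous_delimiter = ''
    bound_length = len(boundary)
    last_bound = boundary + '--'
    while True:
        line = stream.readline()
        if not line:
            break
        stripped = line.rstrip('\r\n')
        n = len(stripped)
        # Only a line whose length matches a marker can be a boundary.
        if (n == bound_length and stripped == boundary) or \
           (n == bound_length + 2 and stripped == last_bound):
            break
        out.write(previous_delimiter + stripped)
        previous_delimiter = line[n:]   # hold back the line terminator

out = io.StringIO()
read_to_boundary_new(io.StringIO('abc\r\ndef\r\n--B--\r\n'), '--B', out)
# out.getvalue() is 'abc\r\ndef'
```

Since the vast majority of body lines have lengths unlike the boundary's, most iterations skip the string comparison entirely, which matches the thread's observation that boundary checking is the hot spot.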
psp_parser: replaces "\n" with LF character
-
Key: MODPYTHON-87
URL: http://issues.apache.org/jira/browse/MODPYTHON-87
Project: mod_python