Almost!
I had to change ZnUtils class>>#streamFrom:to: to say

    readCount := inputStream readInto: buffer startingAt: 1 count: buffer size

rather than startingAt: 0.
Otherwise ZnChunkedReadStream>>#readInto:startingAt:count: was given an offset of 0
and a read count of 0 the first time in, and failed on the ByteArray replace. It seems
a bit fundamental, but it worked fine after I changed it.
If Chris confirms it fixes his problem, we probably should do the update.
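For reference, the relevant copy loop in ZnUtils class>>#streamFrom:to: after the change looks roughly like this (a sketch from memory, not the exact method source; the real method also has a size-reporting variant and flushing behaviour):

```smalltalk
streamFrom: inputStream to: outputStream
	"Copy inputStream to outputStream using a fixed-size buffer.
	Note the 1-based startingAt: index: Smalltalk collections are 1-based,
	so startingAt: 0 produces an invalid replace on the first iteration."
	| buffer readCount |
	buffer := ByteArray new: 16384.
	[ inputStream atEnd ] whileFalse: [
		readCount := inputStream readInto: buffer startingAt: 1 count: buffer size.
		outputStream next: readCount putAll: buffer ]
```

The only change discussed above is the `startingAt: 1`; everything else is just the usual buffered-copy shape.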
On 02 Aug 2012, at 19:43, Stéphane Ducasse <[email protected]> wrote:
sven should we do an update in zinc for 2.0?
Stef
On Aug 2, 2012, at 11:39 AM, Sven Van Caekenberghe wrote:
Hi Chris,
On 21 Jul 2012, at 17:52, Chris <[email protected]> wrote:
On 20/07/2012 18:30, Sven Van Caekenberghe wrote:
On 20 Jul 2012, at 18:22, Chris wrote:
Thanks for that. I'm actually still having a bit of trouble when the file is bigger
than the chunk size. ZnChunkedReadStream>>#readInto:startingAt:count: uses a
limit variable which is bigger than both the collection size and requestedCount.
I won't be able to do anything until the beginning of August.
It would be really helpful if you could provide me with an actual test case
that fails.
Okay, thanks Sven, I'll see what I can do. I am just working with a third-party
server which is sending chunks of 1 MB (which I assume is valid), and I think
Zinc just needs to support chunks bigger than the 16K used in the above method.
Regards,
Chris
The 16K buffer used in ZnUtils>>#streamFrom:to:[size:] is independent of the chunk
buffer used in ZnChunkedReadStream, BUT you did find a serious problem. The code
of ZnChunkedReadStream>>#readInto:startingAt:count: was, frankly, embarrassingly bad
and wrong, although it happened to work in all cases thrown at it until now.
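Conceptually, the corrected method has to cap each copy at whatever remains in the current chunk and refill across chunk boundaries until the requested count is satisfied. A rough sketch of that logic (not the actual committed code; `chunk`, `position`, and the `getNextChunk` helper are assumed names for the stream's internal chunk buffer, the read position within it, and the chunk-refill step):

```smalltalk
readInto: collection startingAt: offset count: requestedCount
	"Copy up to requestedCount elements into collection starting at offset,
	crossing chunk boundaries as needed. Answer the number of elements read.
	Sketch only: instance variable and helper names are assumptions."
	| read amount |
	read := 0.
	[ read < requestedCount and: [ self atEnd not ] ] whileTrue: [
		position >= chunk size ifTrue: [ self getNextChunk ].
		"Never copy more than is left in the current chunk"
		amount := (requestedCount - read) min: (chunk size - position).
		collection
			replaceFrom: offset + read
			to: offset + read + amount - 1
			with: chunk
			startingAt: position + 1.
		position := position + amount.
		read := read + amount ].
	^ read
```

The key point is the `min:` on each iteration: the destination buffer may be smaller than a chunk (Chris's 1 MB chunks against a 16K buffer) or larger than what remains in the chunk, and both directions must be bounded.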
I added an extra test for the case where the buffer being used is smaller than
the chunk size:
ZnChunkedReadStreamTests>>#testReadingBuffered
	| data chunked plain buffer readStream |
	data := String withAll: ($a to: $z), ($A to: $Z).
	chunked := String streamContents: [ :stream |
		ZnUtils nextPutAll: data on: stream chunked: 16 ].
	readStream := ZnChunkedReadStream on: chunked readStream.
	buffer := String new: 11.
	plain := String streamContents: [ :output | | readCount |
		[ readStream atEnd ] whileFalse: [
			readCount := readStream readInto: buffer startingAt: 1 count: buffer size.
			output next: readCount putAll: buffer ] ].
	self assert: plain equals: data
As usual, load the latest code to get the fix. I hope it now works for your
particular case.
Thanks again for reporting this, for being persistent about it, and for pointing me
in the right direction!
Regards,
Sven
Here are the actual commits:
===
Name: Zinc-HTTP-SvenVanCaekenberghe.291
Author: SvenVanCaekenberghe
Time: 2 August 2012, 11:26:02 am
UUID: 3d8c50cd-2d7b-459f-89f3-b77a23dccfdd
Ancestors: Zinc-HTTP-SvenVanCaekenberghe.290
various fixes to ZnChunkedReadStream>>#readInto:startingAt:count: (thx Chris
Bailey for reporting the problem);
added ZnLimitedReadStream>>#nextInto: as it is used by Fuel
===
Name: Zinc-Tests-SvenVanCaekenberghe.151
Author: SvenVanCaekenberghe
Time: 2 August 2012, 11:27:58 am
UUID: 3da15e83-c0ca-4066-a496-71d91393db01
Ancestors: Zinc-Tests-SvenVanCaekenberghe.150
added new ZnChunkedReadStreamTests>>#testReadingBuffered to validate various fixes to
ZnChunkedReadStream>>#readInto:startingAt:count: (thx Chris Bailey for reporting the
problem)
===
--
Sven Van Caekenberghe
http://stfx.eu
Smalltalk is the Red Pill