I have something coded in zero_copy_stream_unittest.cc that reproduces
what is probably the problem Jacob is seeing. I'm working on a

On Mon, Oct 19, 2009 at 11:59 AM, Kenton Varda <ken...@google.com> wrote:
> It sounds like you know what the problem is, so can you send a patch that
> fixes this?  Remember to add a test to zero_copy_stream_unittest.cc.
> On Sun, Oct 18, 2009 at 9:27 AM, Jacob Rief <jacob.r...@gmail.com> wrote:
>> I use protobuf to write self-delimited messages to a file. When I use
>> FileOutputStream, I can close the stream, reopen it at a later time
>> for writing, close it again, and then parse the whole file. When I
>> try to do the same after writing with GzipOutputStream and then
>> parsing with GzipInputStream, I can read up to the end of the first
>> chunk, but then CodedInputStream::ReadRaw returns false and my
>> application loses its sync. If, however, I first uncompress the written
>> file with gunzip and then use FileInputStream to decode it, everything
>> works fine. Also, if I lseek the file descriptor to the beginning of
>> the second chunk (1f 8b 08 ...) and create a new GzipInputStream
>> object on that file descriptor, I can read everything.
>> I did some debugging and found that when I use a zipped file with
>> one chunk (the normal case) and hit EOF in GzipInputStream::Next,
>> Inflate() returns Z_STREAM_END and zcontext_.avail_in is 0. When I do
>> the same test with a concatenated file, on reaching the end of the
>> first chunk in GzipInputStream::Next, Inflate() returns Z_STREAM_END
>> and zcontext_.avail_in is 1129, which means that zlib has some
>> unprocessed bytes in its input buffer.

You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To post to this group, send email to protobuf@googlegroups.com