On Mar 6, 2011, at 18:45, ksamdev wrote:
I think I found the source of the problem. The problem is that
CodedInputStream has an internal counter of how many bytes have been
read so far with the same object.
Ah, right. With the C++ API, the intention is that you will not reuse
the same CodedInputStream object across messages.
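That pattern can be sketched without the real library. The class below is a toy stand-in for CodedInputStream (hypothetical names, not the protobuf API): it keeps a running count of bytes read through the same object and fails once a fixed budget is exceeded, which is why constructing a fresh instance per message avoids the problem.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Toy stand-in for CodedInputStream's total-bytes accounting (hypothetical
// class, NOT the real protobuf API): it counts every byte read through the
// SAME object and refuses to read once a fixed budget is exceeded.
class CountingReader {
 public:
  CountingReader(const std::string& data, std::size_t limit)
      : data_(data), limit_(limit) {}

  // Returns false once the cumulative count would pass the limit,
  // mimicking CodedInputStream hitting its total-bytes limit.
  bool Read(std::size_t pos, std::size_t n, std::string* out) {
    if (total_ + n > limit_) return false;
    if (pos + n > data_.size()) return false;
    *out = data_.substr(pos, n);
    total_ += n;  // the counter survives across Read() calls on this object
    return true;
  }

 private:
  const std::string& data_;
  std::size_t limit_;
  std::size_t total_ = 0;
};

// Reading many messages through ONE reader eventually trips the limit...
bool ReadAllWithOneReader(const std::string& file, std::size_t msg,
                          std::size_t limit) {
  CountingReader reader(file, limit);
  std::string chunk;
  for (std::size_t pos = 0; pos < file.size(); pos += msg)
    if (!reader.Read(pos, msg, &chunk)) return false;
  return true;
}

// ...while a FRESH reader per message never accumulates a count.
bool ReadAllWithFreshReaders(const std::string& file, std::size_t msg,
                             std::size_t limit) {
  std::string chunk;
  for (std::size_t pos = 0; pos < file.size(); pos += msg) {
    CountingReader reader(file, limit);  // new object: counter starts at zero
    if (!reader.Read(pos, msg, &chunk)) return false;
  }
  return true;
}
```

With a 1000-byte "file", 100-byte messages, and a 500-byte budget, the single shared reader fails after five messages while the per-message readers finish the whole file.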
Hmm, thanks for the advice. It may work fine. Nevertheless, in that case I
would have to skip the previously read messages every time a new
CodedInputStream is created.
In fact, I ran into a different problem recently. It turns out I can write
arbitrarily long files, even 7 GB, with no problems.
Unfortunately, reading does not work.
On Mar 7, 2011, at 13:03, ksamdev wrote:
Hmm, thanks for the advice. It may work fine. Nevertheless, in that
case I would have to skip the previously read messages every time a
new CodedInputStream is created.
Not true: creating a CodedInputStream does not change the position in
the underlying stream. Your reads continue from wherever the underlying
stream left off.
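That point can be illustrated with plain standard-library streams (the Wrapper class below is hypothetical, standing in for CodedInputStream): constructing a new wrapper performs no reads, so a second wrapper over the same stream picks up exactly where the first left off.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <utility>

// Minimal wrapper over a std::istream, standing in for CodedInputStream
// (illustration only): the constructor performs no reads, so the underlying
// stream's position is untouched until you actually read through it.
class Wrapper {
 public:
  explicit Wrapper(std::istream& in) : in_(in) {}  // no reads happen here

  std::string ReadBytes(std::size_t n) {
    std::string buf(n, '\0');
    in_.read(&buf[0], static_cast<std::streamsize>(n));
    buf.resize(static_cast<std::size_t>(in_.gcount()));
    return buf;
  }

 private:
  std::istream& in_;
};

// Read two records through two SEPARATE wrappers over the same stream:
// the second wrapper continues exactly where the first one stopped.
std::pair<std::string, std::string> ReadTwo(std::istream& in) {
  std::string a = Wrapper(in).ReadBytes(5);
  std::string b = Wrapper(in).ReadBytes(5);  // fresh wrapper, same position
  return {a, b};
}
```

Feeding the stream "helloworld" through ReadTwo yields "hello" and then "world": creating the second wrapper did not rewind anything.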
How come? I explicitly track the largest message written to the file
with: http://goo.gl/SAKlU
Here is an example of the output I get:
[1 ProtoBuf git.hist]$ ./bin/write data.pb; echo ---===---; ./bin/read data.pb
Saved: 100040 events
Largest message size written: 1815 bytes
---===---
File has:
In my case, there are a lot of small messages saved in the same file. I do
not read them all at once and therefore do not care about large messages.
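The many-small-messages-in-one-file layout discussed above is usually done with a varint length prefix before each message, the same framing that CodedOutputStream's WriteVarint32 produces. Here is a self-contained sketch of that technique with hand-rolled varints, so it needs no protobuf dependency; reading back never needs to know the largest message size in advance.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Append a 32-bit varint to out, in the same base-128 wire format that
// protobuf uses for length prefixes: low 7 bits per byte, high bit set
// on every byte except the last.
void WriteVarint32(uint32_t v, std::string* out) {
  while (v >= 0x80) {
    out->push_back(static_cast<char>((v & 0x7F) | 0x80));
    v >>= 7;
  }
  out->push_back(static_cast<char>(v));
}

// Decode a varint at *pos, advancing *pos past it.
uint32_t ReadVarint32(const std::string& in, std::size_t* pos) {
  uint32_t result = 0;
  int shift = 0;
  while (true) {
    uint8_t byte = static_cast<uint8_t>(in[(*pos)++]);
    result |= static_cast<uint32_t>(byte & 0x7F) << shift;
    if (!(byte & 0x80)) break;
    shift += 7;
  }
  return result;
}

// Frame each message as <varint length><payload>, the usual convention
// for streaming many protobuf messages into one file.
std::string WriteAll(const std::vector<std::string>& msgs) {
  std::string file;
  for (const std::string& m : msgs) {
    WriteVarint32(static_cast<uint32_t>(m.size()), &file);
    file += m;
  }
  return file;
}

// Read the frames back one at a time until the file is exhausted.
std::vector<std::string> ReadAll(const std::string& file) {
  std::vector<std::string> msgs;
  std::size_t pos = 0;
  while (pos < file.size()) {
    uint32_t len = ReadVarint32(file, &pos);
    msgs.push_back(file.substr(pos, len));
    pos += len;
  }
  return msgs;
}
```

A round trip of mixed sizes, including a 200-byte payload whose length needs a two-byte varint, comes back unchanged.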