I think the idea is to break up very large data sets into smaller
packets so they can be 'streamed'.
When I think of something like seismic data, stream-based event
handling makes the most sense.
Can the data points be processed individually somehow, or do you need
access to all of them at once (in which case streaming won't help)?
The Partial serialize and parse routines actually do something completely
unrelated: they allow the message to be missing required fields. So, that
doesn't help you.
I'm afraid protocol buffers are not designed for storing very large
collections in a single message. Instead, you should break the
collection into many smaller messages and stream those individually.
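A minimal sketch of that "many smaller messages" pattern, assuming a simple length-prefixed framing (protobuf itself doesn't define a container format for a stream of messages, so the 4-byte length header here is one common convention, not part of the library). The byte strings stand in for serialized messages:

```python
import struct
import io

def write_delimited(stream, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length so the
    # reader knows where one chunk ends and the next begins.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def read_delimited(stream):
    # Yield one payload at a time; the whole collection never has
    # to be held in memory as a single giant message.
    while True:
        header = stream.read(4)
        if not header:
            break
        (size,) = struct.unpack(">I", header)
        yield stream.read(size)

# Stand-ins for serialized protobuf messages (e.g. one per data chunk).
chunks = [b"chunk-0", b"chunk-1", b"chunk-2"]

buf = io.BytesIO()
for c in chunks:
    write_delimited(buf, c)

buf.seek(0)
assert list(read_delimited(buf)) == chunks
```

Each chunk is parsed and handled on its own, which is what makes the streaming approach work for data sets too large to fit in one message.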
OK, that makes sense. Thanks for the quick reply.
I work at a seismic earthquake data center. We're looking at using
protocol buffers as a means of internally moving around processed
chunks of data. It seems to work pretty well, as long as the chunks
aren't too large (which is a problem one way or another).