As I say, you can do that *now* (not parsing unwanted fields, and scanning
through the data without buffering everything in memory). For protobuf-net,
there is an example of the *second* part of this ("streaming demo" or
something - I don't have the code handy). The first part is simply: pass in
a
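A rough sketch of the first part (hand-rolled for illustration, not protobuf-net's actual API; all names here are made up): a protobuf wire-format reader can skip fields it was not asked for by reading each tag and advancing past the payload by wire type instead of decoding it.

```python
# Illustrative only: a minimal protobuf wire-format reader that decodes
# just the fields the caller asked for and skips everything else.

WIRETYPE_VARINT, WIRETYPE_64BIT, WIRETYPE_LENGTH, WIRETYPE_32BIT = 0, 1, 2, 5

def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_wanted(buf, wanted):
    """Return {field_number: raw_value} for wanted fields, skipping the rest."""
    out, pos = {}, 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        field, wire = tag >> 3, tag & 7
        if wire == WIRETYPE_VARINT:
            val, pos = read_varint(buf, pos)
        elif wire == WIRETYPE_LENGTH:
            length, pos = read_varint(buf, pos)
            val, pos = buf[pos:pos + length], pos + length
        elif wire == WIRETYPE_64BIT:
            val, pos = bytes(buf[pos:pos + 8]), pos + 8
        elif wire == WIRETYPE_32BIT:
            val, pos = bytes(buf[pos:pos + 4]), pos + 4
        else:
            raise ValueError("unsupported wire type %d" % wire)
        if field in wanted:  # unwanted fields are passed over, never stored
            out[field] = val
    return out

# field 1 = varint 150, field 2 = string "hi"; only field 2 is wanted
msg = bytes([0x08, 0x96, 0x01, 0x12, 0x02]) + b"hi"
print(parse_wanted(msg, {2}))  # {2: b'hi'}
```

Note the skip is still a linear scan over the buffer (length-delimited fields are jumped in one step); nothing unwanted is materialised, which is the point being made above.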
Marc:
Thanks for your input. I think your comment helps me clarify my query:
Most applications or services that are "producers" will generate data with
N fields in it. Consumers may be interested in only m fields, where m could be 5
and N could be 20. For example: an address book service will generate
Firstly, I must note that those benchmarks are specific to protobuf-net (a
specific implementation), not "protocol buffers" (which covers a range of
implementations). Re "is it not more realistic"; well, that depends entirely
on what your use-case *is*. It /sounds/ like you are really talking about
I saw that ProtoBuf has been benchmarked using the Northwind data
set: a data set of size 130K, with 3,000 objects including orders and
order line items.
This is an excellent review:
http://code.google.com/p/protobuf-net/wiki/Performance
Is it not more realistic, to have a benchmark with a m