On Thu, Oct 8, 2009 at 10:57, sergei175 <sergei...@googlemail.com> wrote:
> Ok, this is a simple example of proto buffers file.
> I want to write 1000 "Records". Each record has its name and
> Each array has its name and a set of double numbers, For my example,
> I've filled array with 10 000 numbers for all 1000 Records.
> There are 2 things you will see:
> 1) After event 500, even 200MB of memory is not enough.
> 2) It's slower by a factor of ~5 compared to Java serialization with
So for Java serialization, you have a class that contains an
ArrayList<NamedArray>, with each NamedArray object containing a
Vector<Double>, and you then serialize the whole ArrayList<NamedArray> to
a file?
> 3) File size is very large. I do not know how to fill
> compressed records on the fly using this package.
If you want to write independent records, you should write them
delimited to a file rather than keeping everything in memory.
Regarding compression: you eventually write everything to a stream, so
you can wrap that stream in a GZIPOutputStream - I guess that is what
you do for the compressed Java serialization as well.
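To make the two points above concrete, here is a minimal sketch of delimited records inside a gzip-compressed stream. The class and method names are made up for illustration, and a plain byte[] stands in for a serialized protobuf message; the point is the framing (a length prefix per record) and the stream wrapping, not a particular API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: each record is written as a 4-byte length prefix
// followed by its bytes, and the whole stream is wrapped in gzip.
public class GzipDelimited {

    // Write all records; only the current record is held in memory.
    static byte[] writeAll(byte[][] records) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out =
                 new DataOutputStream(new GZIPOutputStream(buf))) {
            for (byte[] r : records) {
                out.writeInt(r.length);
                out.write(r);
            }
        }
        return buf.toByteArray();
    }

    // Read records back one at a time until the stream ends.
    static List<byte[]> readAll(byte[] data) throws IOException {
        List<byte[]> result = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(
                 new GZIPInputStream(new ByteArrayInputStream(data)))) {
            while (true) {
                int len;
                try {
                    len = in.readInt();
                } catch (EOFException e) {
                    break; // clean end of stream
                }
                byte[] r = new byte[len];
                in.readFully(r);
                result.add(r);
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        byte[][] records = {"alpha".getBytes(), "beta".getBytes()};
        List<byte[]> back = readAll(writeAll(records));
        System.out.println(back.size());             // 2
        System.out.println(new String(back.get(1))); // beta
    }
}
```

In a real program you would replace the ByteArrayOutputStream with a FileOutputStream and the opaque byte[] with message.toByteArray(), so only one record at a time is ever in memory.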
> Finally, there is no even sensible approach to append new "Records"
> to the existing file (without "merge", which in fact has to parse
> existing file first!)
Protocol buffers don't provide the transport or storage layer. They
provide the encoding. You have to provide the storage yourself. A
simple default implementation might be useful to start with, but many
people would still need to write their own way of storing things.
OTOH, it is only a handful of lines to write it yourself.
For things like this (and it has been discussed many times on this
list), you should write out delimiters telling the size of the next
record, followed by the record itself. I think something has even been
added recently to the API to make this simpler (I don't know,
I use my own implementation ;) )
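This size-prefix framing is also what makes appending cheap: since each record carries its own length, adding a new one is just opening the file in append mode, with no merge and no re-parse of the existing content. A minimal sketch (the names are hypothetical, and a byte[] again stands in for a serialized message; recent protobuf Java releases also offer MessageLite.writeDelimitedTo / parseDelimitedFrom, which do the same kind of framing with a varint prefix):

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

// Hypothetical sketch: append length-prefixed records to a file
// without ever reading the records that are already there.
public class DelimitedAppend {

    static void appendRecord(File f, byte[] record) throws IOException {
        // The second argument opens the file in append mode.
        try (DataOutputStream out = new DataOutputStream(
                 new FileOutputStream(f, true))) {
            out.writeInt(record.length);
            out.write(record);
        }
    }

    static int countRecords(File f) throws IOException {
        int n = 0;
        try (DataInputStream in = new DataInputStream(
                 new BufferedInputStream(new FileInputStream(f)))) {
            while (true) {
                int len;
                try {
                    len = in.readInt();
                } catch (EOFException e) {
                    break; // end of file
                }
                byte[] skip = new byte[len];
                in.readFully(skip); // skip over the record body
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        File f = Files.createTempFile("records", ".bin").toFile();
        f.deleteOnExit();
        appendRecord(f, "first".getBytes());
        appendRecord(f, "second".getBytes()); // no merge, no re-parse
        System.out.println(countRecords(f)); // 2
    }
}
```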
You received this message because you are subscribed to the Google Groups
"Protocol Buffers" group.