These are valid points, but I need to work within the constraints that
I have: I need to write the serialized buffers into different files.  I
may end up serializing a second time, but I would still like to know
if there is a way that is NOT a hack to avoid this.

Nader

On Mar 29, 2:41 pm, Henner Zeller <[email protected]>
wrote:
> On Mon, Mar 29, 2010 at 10:31, Nader Salehi <[email protected]> wrote:
> > Hi,
>
> > In my code, I have a PB message which encompasses other PB messages.
> > For instance,
>
> > Protocol Buffer File
> > ===============
> > message A {
> >  required int32 a = 1;
> > }
>
> > message B {
> >  required int32 b = 1;
> > }
>
> > message C {
> >  required A a = 1;
> >  required B b = 2;
> > }
>
> > C++ File
> > ========
> > #include "PB File.pb.h"
>
> > using namespace <whatever>;
>
> > int main ()
> > {
> >  C c;
> >   ...
> >  char buff[MAX_SIZE];
> >  int len = c.ByteSize();
> >  c.SerializeToArray(buff, len);  // SerializeToArray returns bool, not a length
> >  write(fd, buff, len);
> >  ...
> > }
>
> > As part of the logic, I need to write into separate files the
> > serialized equivalent of c.a and c.b.  Of course you can always call
> > c.a().SerializeToArray() but that requires additional CPU time which I
> > would like to avoid.  Is there a non-hackish way of getting the offset
> > of c.a in the serialized buffer?  Can I use the offset and
> > c.a().ByteSize() to write the message into the file descriptor?
>
> You might want to write the data directly to a stream with the
> available stream classes instead of copying it into a buffer first.
> Second, I wouldn't worry at all about the CPU cost of serializing
> c.a() and c.b() individually; you should first measure whether it
> actually makes a difference before optimizing it in a hackish way.
>
> -h

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.