Size in what context?  RAM or on disk?  Or the amount of memory that
is required at run time for your application to function?

Create a statically linked binary using the same set of tests and post
the results.  The size of the .so is a less-than-perfect measure of
bloat or efficiency, and I'd be hard pressed to argue that the on-disk
size of a .so matters in any real-world application.  Try either
stripping your .so (strip(1)) or creating a statically linked binary
without debugging flags (-static -Os && strip -s ${MY_BINARY}); I'd
imagine either will yield a reasonably small binary.  It looks like
the phenomenon you're commenting on is the size of the debugging
symbols (note the size of the text section of the binaries).  Run
nm(1) on the .so and you'll see all kinds of namespace information and
other miscellaneous string data that contributes to an enormous .so
binary.
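
For example, something along these lines (the file names and the extra
-lpthread are my guesses, not taken from your build; assumes g++ and
GNU binutils):

$ nm -C libprotobuf.so | head     # namespace-qualified names, string data
$ strip -s libprotobuf.so && size libprotobuf.so
$ g++ -Os -static -o test1_static test1.cc test1.pb.cc -lprotobuf -lpthread
$ strip -s test1_static && size test1_static

Stripping the .so only drops the regular symbol table and the debug
sections; the dynamic symbols it needs at run time stay put.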

In a recent C++ project, the size of our .so library went from a few
MiB to tens of MiB after we started making use of C++ namespaces.  The
resulting statically linked and stripped binary, however, contained
none of the bloat we saw after adding namespaces to the code base.
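
To make that concrete, here's a contrived sketch (the names below are
made up, not from our project): every enclosing namespace gets encoded
into a function's mangled name, so nested namespaces mean longer
strings in the symbol table and in the debug info.

namespace protobuf_tests {
namespace internal {
class Test1Reflection {
 public:
  void BuildDescriptors();
};

// Defined out of line so the compiler emits a real symbol for it.
void Test1Reflection::BuildDescriptors() {}
}  // namespace internal
}  // namespace protobuf_tests

nm -C on the resulting object shows
protobuf_tests::internal::Test1Reflection::BuildDescriptors(), whose
raw form is
_ZN14protobuf_tests8internal15Test1Reflection16BuildDescriptorsEv;
strip(1) removes that string from a statically linked executable.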

Hope that's useful.


Wink Saville wrote:
> I've been looking at protobuf and I'm somewhat disappointed by the size of
> the library on X86_64 and the size of the generated code for a simple
> message:
>
> $ size libprotobuf.so
>    text       data        bss        dec        hex    filename
> 1008339      21344       1128    1030811      fba9b    libprotobuf.so
>
> The flags for gcc I used for my simple test program was:
>
> CFLAGS := -Wall -g -DGOOGLE_NO_RTTI -o2
>
> The simple protobuf message was:
>
> $ cat test1.proto
> syntax = "proto2";
> option optimize_for = SPEED;
>
> package protobuf_tests;
>
> message test1 {
>   required int32 v = 1;
>   optional int32 o = 2;
>   repeated string s = 3;
> }
>
>
> Size when optimized for speed:
>
>    text       data        bss        dec        hex    filename
>   15851          8         33      15892       3e14    test1.pb.o
>
> Size when not optimized for speed:
>
>    text       data        bss        dec        hex    filename
>    6852          8         33       6893       1aed    test1.pb.o
>
>
> As would be expected, the performance difference was pretty large.
> Optimized for speed:
>
> test1_cpp serialze Done total=0.656162secs 1000000 loops 656ns/loop
> test1_cpp deserialize Done total=0.434740 1000000 loops 434ns/loop
>
> Not optimized for speed:
>
> test1_cpp serialze Done total=1.994011secs 1000000 loops 1994ns/loop
> test1_cpp deserialize Done total=1.609001 1000000 loops 1609ns/loop
>
> The two loops are below:
>
>   nsecs_t start = system_time_ns();
>   for (int i=loops; i != 0; i--) {
>     t.SerializeToString(&data);
>   }
>   nsecs_t stop = system_time_ns();
>
>   start = system_time_ns();
>   for (int i=loops; i != 0; i--) {
>     x.ParseFromString(data);
>   }
>   stop = system_time_ns();
>
>
>
> Given the above, I thought I'd try protobuf-c, which appears to ignore the
> speed option; it is quite a bit smaller and somewhat faster on this simple
> message:
>
>    text       data        bss        dec        hex    filename
>    1370         56          0       1426        592    test1.pb-c.o
>   51751       1320         16      53087       cf5f    libprotobuf-c.so
>
> test1_c serialze Done total=0.182868secs 1000000 loops 182ns/loop
> test1_c deserialize Done total=0.420284 1000000 loops 420ns/loop
>
> The loops for protobuf-c are:
>
>   nsecs_t start = system_time_ns();
>   for (int i=loops; i != 0; i--) {
>     size = protobuf_tests__test1__get_packed_size(&t);
>     protobuf_tests__test1__pack(&t, data);
>   }
>   nsecs_t stop = system_time_ns();
>
>   start = system_time_ns();
>   for (int i=loops; i != 0; i--) {
>     _ProtobufTests__Test1 *x = protobuf_tests__test1__unpack(NULL, size, data);
>     protobuf_tests__test1__free_unpacked(x, NULL);
>   }
>   stop = system_time_ns();
>
> So the protobuf library is about 19x larger (1,000,000/52,000), and the
> generated code is about 11x larger (16,000/1,400) when optimized for speed
> and about 5x larger (6,900/1,400) when not optimized for speed.  I could be
> making an inappropriate comparison, and protobuf-c is certainly not as
> mature, but it does look encouraging.
>
> This may not be news to anyone, but the large difference makes me wonder if
> it would be worthwhile to create a protobuf-lite.  What do people feel is
> the minimum feature set that would be needed for protobuf-lite?  Does anyone
> else feel a lite version would be desirable?
>
> Other ideas or comments?
>
> -- Wink