I did a test locally using protobuf-net (since that is what I'm most
familiar with); to get an output of about 3,784,000 bytes, I used a count of
172,000 items in the inner array - does that sound about right? Then I
tested it in a loop as per this gist:
https://gist.github.com/mgravell/10a21970531485008731d700b89ec732
The timings I get there are about 25ms to serialize, 40ms to deserialize -
although my machine is quite fast, so this may mean you're pretty much
getting "about right" numbers. What sort of numbers are you looking for
here? Note that in .NET the first run will always be slightly slower due to
JIT.
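To illustrate the JIT caveat: a timing loop normally runs the operation once untimed first, then averages many timed iterations. A minimal sketch (the `TimeAverage` helper and the usage line are illustrative, not from the gist):

```csharp
using System;
using System.Diagnostics;

static class Bench
{
    // Run once untimed (the first run pays the JIT cost), then time
    // 'iterations' runs and return the average per-run duration.
    public static TimeSpan TimeAverage(Action action, int iterations = 100)
    {
        action(); // warm-up: excludes JIT compilation from the measurement

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) action();
        sw.Stop();

        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
    }
}

// hypothetical usage:
// var perRun = Bench.TimeAverage(() => pointCloud.ToByteArray());
```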
Additionally, I know nothing about the zmq overheads - are you including
the zmq cost in your numbers?
If you want to squeeze out the last few drops of performance, you usually
can - for example, by checking whether zmq allows you to pass a stream, a
span, or an *oversized* array. Again, I'm not as familiar with the Google C#
version as I am with protobuf-net, but *if* (and it is a huge "if")
"ToByteArray()" is essentially writing to a MemoryStream and then calling
ToArray, you can probably avoid an extra blit and some allocations by
providing your own re-used MemoryStream and using GetBuffer to access the
oversized backing array, remembering to limit yourself to just the first
.Length bytes. But again, a lot of this depends on the specifics of zmq and
the Google C# version. It is almost certainly diminishing returns.
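As a sketch of that idea - assuming Google.Protobuf's `WriteTo(Stream)` extension, and assuming your zmq binding has *some* way to send a (buffer, offset, length) triple (the `SendBytes` call below is hypothetical; substitute the real API):

```csharp
using System.IO;
using Google.Protobuf;

// Reuse one MemoryStream across sends instead of calling ToByteArray()
// (which allocates a fresh, exactly-sized array) on every message.
static readonly MemoryStream _reused = new MemoryStream();

void Publish(PointCloud pointCloud)
{
    _reused.Position = 0;
    _reused.SetLength(0);          // reset without releasing capacity
    pointCloud.WriteTo(_reused);   // serialize straight into the stream

    byte[] oversized = _reused.GetBuffer();  // backing array, no copy
    int length = (int)_reused.Length;        // only the first 'length' bytes are valid

    // hypothetical send; the real ZeroMQ call depends on your binding
    zmqPublisher.SendBytes(oversized, 0, length);
}
```

After the first few messages the stream's backing array stops growing and the per-send allocations disappear; the trade-off is that `GetBuffer` returns an array larger than the payload, so the length must be passed alongside it.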
So: what numbers are you *looking to get*? What would be "acceptable"? And
how complex is your data? Is what you've shown the *only* data you need to
transfer? If so, it might not be a bad candidate for fully manual, explicit
serialization not involving a library - just a payload of:
Height [int32 fixed 4 bytes]
Width [int32 fixed 4 bytes]
Time [int64 fixed 8 bytes]
ElementCount [int32 fixed 4 bytes]
then for each element: 16 bytes consisting of
X [float fixed 4 bytes]
Y [float fixed 4 bytes]
Z [float fixed 4 bytes]
RGB [int32 fixed 4 bytes]
This would require manual coding and would typically outperform
library-based options - but it would be more brittle and would require you
to be reasonably good at IO code.
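A minimal sketch of that fixed layout using BinaryWriter/BinaryReader (little-endian); the `Point` struct and method names here are illustrative, not from the original code:

```csharp
using System;
using System.IO;

struct Point { public float X, Y, Z; public uint Rgb; }

static byte[] Serialize(int height, int width, long time, Point[] points)
{
    using var ms = new MemoryStream(20 + points.Length * 16);
    using var w = new BinaryWriter(ms);
    w.Write(height);        // Height       [int32, 4 bytes]
    w.Write(width);         // Width        [int32, 4 bytes]
    w.Write(time);          // Time         [int64, 8 bytes]
    w.Write(points.Length); // ElementCount [int32, 4 bytes]
    foreach (var p in points)
    {
        w.Write(p.X);       // X   [float, 4 bytes]
        w.Write(p.Y);       // Y   [float, 4 bytes]
        w.Write(p.Z);       // Z   [float, 4 bytes]
        w.Write(p.Rgb);     // RGB [int32, 4 bytes]
    }
    return ms.ToArray();
}

static Point[] Deserialize(byte[] payload, out int height, out int width, out long time)
{
    using var r = new BinaryReader(new MemoryStream(payload));
    height = r.ReadInt32();
    width = r.ReadInt32();
    time = r.ReadInt64();
    var points = new Point[r.ReadInt32()];
    for (int i = 0; i < points.Length; i++)
    {
        points[i].X = r.ReadSingle();
        points[i].Y = r.ReadSingle();
        points[i].Z = r.ReadSingle();
        points[i].Rgb = r.ReadUInt32();
    }
    return points;
}
```

The C++ side would read the same fields in the same order (e.g. via `memcpy` from the zmq buffer), since both sides here are little-endian fixed-width fields.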
Personally, I'd probably leave it alone...
Marc
On 22 December 2017 at 13:03, Ravi <[email protected]> wrote:
> Any suggestions, please?
>
>
> On Wednesday, December 20, 2017 at 10:00:31 PM UTC+9, Ravi wrote:
>>
>> I have defined the Protocol Buffers message file as follows:
>>
>> syntax = "proto3";
>> package Tutorial;
>> import "google/protobuf/timestamp.proto";
>> option csharp_namespace = "Tutorial";
>>
>> message PointCloud {
>>   int32 width = 1;
>>   int32 height = 2;
>>
>>   message Point {
>>     float x = 1;
>>     float y = 2;
>>     float z = 3;
>>     fixed32 rgb = 4;
>>   }
>>   repeated Point points = 3;
>>   google.protobuf.Timestamp timestamp = 4;
>> }
>>
>> This is how I am preparing the data and serializing it in C#:
>> using Google.Protobuf;
>> using Tutorial;
>> using ZeroMQ;
>>
>> PointCloud pointCloud = new PointCloud();
>> pointCloud.Height = Height;
>> pointCloud.Width = Width;
>> pointCloud.Timestamp = Timestamp.FromDateTime(DateTime.UtcNow);
>>
>> for (var index = 0; index < POINTS3D_COUNT; index++) {
>>     PointCloud.Types.Point point = new PointCloud.Types.Point {
>>         X = points3D[index].X,
>>         Y = points3D[index].Y,
>>         Z = points3D[index].Z,
>>         Rgb = (uint)((red << 16) | (green << 8) | blue)
>>     };
>>
>>     pointCloud.Points.Add(point);
>> }
>>
>> zmqPublisher.Send(new ZFrame(pointCloud.ToByteArray()));
>>
>> This is how I deserialize the data in C++:
>> while (receive) {
>>     zmq::message_t msg;
>>     int rc = zmq_socket.recv(&msg);
>>     if (rc) {
>>         Tutorial::PointCloud point_cloud;
>>         point_cloud.ParseFromArray(msg.data(), msg.size());
>>     }
>> }
>>
>> I am able to get the data back properly. However, the serialization and
>> deserialization process seems slow.
>>
>> - I used *System.Diagnostics.Stopwatch* in C# and noticed that
>> *pointCloud.ToByteArray()* is taking approximately 100ms.
>> - Similarly, I used *std::chrono::steady_clock::now()* in C++ and
>> noticed that *point_cloud.ParseFromArray(msg.data(), msg.size())* is
>> taking approximately 96ms.
>> - Just for information, the length of the byte array is roughly
>> 3,784,000 bytes.
>>
>>
>> *How can I speed up the serialization and deserialization process?*
>>
>> -
>> Thanks
>> Ravi
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Protocol Buffers" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at https://groups.google.com/group/protobuf.
> For more options, visit https://groups.google.com/d/optout.
>
--
Regards,
Marc