The data going to FromWire - where did that come from? Was it successive
calls to Read, for example? If so, you are not guaranteed to "read"
in the same blocks that you "write" - see http://tiny.cc/io (sorry if I'm
stating the obvious here).
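To illustrate that point (a sketch in Python for brevity, since the framing logic is language-neutral and nothing here is protobuf-net API - `read_exactly` is an illustrative name): a single read on a stream or socket may return fewer bytes than one send wrote, so framing code must loop until the whole frame has arrived.

```python
def read_exactly(stream, n):
    """Read exactly n bytes from a stream, looping over short reads.

    A single read() may legally return anywhere from 1 to n bytes,
    so we keep reading until the frame is complete; an empty read
    means the peer closed the connection mid-frame.
    """
    buf = bytearray()
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError("stream closed mid-frame")
        buf.extend(chunk)
    return bytes(buf)
```

Deserializing whatever a single Read call happened to return, instead of looping like this, produces exactly the kind of intermittent corruption described below - it works when frames happen to arrive whole, and fails under load when they do not.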
Finally, though, my biggest suggestion would be to investigate: detect
this scenario explicitly, and log the buffer contents (and the length, etc).
Then inspect it: if the same data deserializes fine in isolation, I'll be
genuinely surprised - most often the error turns out to be in getting the
right data (and in the right chunks) to the serializer - meaning: the
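That diagnostic can be sketched like so (Python again for brevity; `decode_with_capture` and the `deserialize` parameter are illustrative names, not protobuf-net API): capture the exact frame bytes whenever deserialization fails, so the identical payload can later be replayed through the deserializer in isolation.

```python
import binascii
import logging

def decode_with_capture(deserialize, frame: bytes):
    """Run `deserialize` on `frame`; on failure, log the raw bytes.

    Logging the length plus a hex dump lets you replay the exact
    payload in isolation later - if it then decodes fine, the bug
    is in the framing/IO code, not in the serializer.
    """
    try:
        return deserialize(frame)
    except Exception:
        logging.error("bad frame (%d bytes): %s",
                      len(frame), binascii.hexlify(frame).decode("ascii"))
        raise
```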
On 23 Mar 2013 18:53, "oxoocoffee" <rjge...@gmail.com> wrote:
> Hello Mark,
> I will do so and double or triple check :)
> In the meantime, to give you a little more info: the client socket reader
> thread is the only one reading off this socket. The main app thread is
> writing to the same socket, but using different messages.
> Here are my two static functions used for serializing and deserializing
> wire messages, where headerSize is the size of a constant UInt32 header
> indicating the message type.
> I just realized that I do check in FromWire whether the buffer is null and
> construct a default object. This looks like a possible problem, but in the
> cases where things go bad the culprit message has some fields set, and the
> default constructor does not set any fields (no defaults in the .proto). So
> this makes me believe that the buffer is never null. I will change this to
> an assert and check whether it ever happens, just to take it out of the loop.
> static class ProtoUtility
> {
>     // Serialize instance, leaving headerSize zeroed bytes at the front
>     // for the caller to stamp the message-type header into.
>     public static byte[] ToWire<T>(this T instance, Int32 headerSize)
>     {
>         using (var stream = new MemoryStream())
>         {
>             stream.Write(new byte[headerSize], 0, headerSize);
>             Serializer.Serialize<T>(stream, instance);
>             return stream.ToArray();
>         }
>     }
>
>     // Deserialize msgSize bytes of buffer, skipping the header bytes.
>     public static T FromWire<T>(this byte[] buffer, Int32 msgSize, Int32 headerSize)
>     {
>         if (buffer == null)
>             return default(T);
>         using (var stream = new MemoryStream(buffer, 0, msgSize))
>         {
>             stream.Seek(headerSize, SeekOrigin.Begin);
>             return Serializer.Deserialize<T>(stream);
>         }
>     }
> }
> Reading from the socket looks like this: read a UInt32 _type (the message
> type), then switch on it:
> switch (_type)
> {
>     case 1:
>         MessageType_1 msg_1 =
>             ProtoUtility.FromWire<MessageType_1>(wireData, wireData.Length, 0);
>         OnMessage_1_Event(msg_1);
>         break;
>     default:
>         Debug.Assert(false, "Unsupported Message Type: " + _type);
>         break;
> }
> Thank you all for helping out... :)
> On Saturday, March 23, 2013 1:35:27 PM UTC-5, Marc Gravell wrote:
>> Protobuf-net does not swallow any errors - if bad things happen inside, it
>> shouts loudly. Additionally, the API is thread-safe - during deserialization
>> no state is shared.
>> The first thing I would look at is the code *around* protobuf - any IO
>> code, for example. Is there any chance different readers are accessing the
>> same socket (etc) concurrently? Or an error in your "framing" code?
>> We use protobuf-net constantly in some heavily threaded code under high
>> load (stackoverflow.com), and it does just fine. If you can provide a
>> repro I'll happily take a look...
>> On 23 Mar 2013 17:55, "oxoocoffee" <rjg...@gmail.com> wrote:
>>> Hello everyone,
>>> I have this very strange problem. Let me define which versions of
>>> protocol buffers I am using (I tried combinations of them to see if there
>>> is something specific to a particular version):
>>> protobuf-net - 188.8.131.520 (also tried latest protobuf-net r622.zip)
>>> protobuf protobuf-2.4.1.tar.gz and protobuf-2.5.0.tar.gz
>>> In a nutshell, I get corrupted/missing data on the .NET side. The server
>>> always processes messages from the client just fine, never a problem.
>>> Here are more details about the setup and where things work and where
>>> they do not. Just so you know, I am running tests all day, every day,
>>> sending a couple million messages a day.
>>> I have an application server (C++/Linux/x64) and a .NET client
>>> application (Win7/Win8 x64). Multiple clients connect to the server (the
>>> same server). All clients use the same version of protobuf-net.
>>> If I have one client instance running on a single client machine, all is
>>> good and there are no problems.
>>> But in my lab, if I want to test multiple clients running on the same
>>> physical machine, I get some strange errors at random - very random, so
>>> it seems.
>>> The server is implemented so that each client is handled by two threads
>>> (a network reader and a writer). Each client sender thread creates a
>>> protobuf message on the stack, fills it with data, and sends it to the
>>> client.
>>> I log what is being sent to the client to a file, just to make sure all
>>> is ok (this is only enabled in DEBUG builds, to chase this problem). I
>>> have about 40 different types of protobuf messages. C++ and protobuf-net
>>> share the same .proto file.
>>> So the server pseudo code looks like this:
>>> SendMessage( SomeInternalMessage ms )
>>> {
>>>     ProtoBufMessage_1 pbMsg;
>>>     pbMsg.set_field_1( ms.field_1() );
>>>     pbMsg.set_field_n( ms.field_n() );
>>>     log( pbMsg ); // Log message to text file to check what is being
>>>                   // set and sent
>>> }
>>> The client, on the other hand, has a separate network reader thread
>>> reading messages, deserializing the protobuf-net messages, and raising
>>> events to pass data to the UI. There is a check on all events for
>>> cross-thread calls, calling BeginInvoke when needed.
>>> Here is the strange part, as I started to describe above. When running a
>>> single client on a single machine connecting to the same server, all
>>> works ok, so it seems, running continuously every day.
>>> When I start running 3-4 clients per client machine, still connecting to
>>> the same server, I get once in a while a message on the client from the
>>> server that has invalid data. Not all, but some members are invalid,
>>> causing my application problems.
>>> Some of the fields that I see mostly bad/stale are DateTime or Int64.
>>> I did check in the server logs what was sent, and it looks correct: not
>>> null or 0 where applicable, and values are within the proper range (no
>>> overflow on int types). So I know the server is sending it correctly.
>>> I did go over the code a few times to make sure Log() is not changing
>>> data on the protobuf message, and it is the only call between pack and
>>> send.
>>> Are there any protobuf error handlers on the C++ or C# side that I can
>>> add to catch internal errors within each library, to help me find out
>>> what is going on?
>>> At this point things are looking like something with protobuf-net. But
>>> that is just a guess, since the server does not change (except rebuilding
>>> it against protobuf-2.4.1 or protobuf-2.5.0 for testing and isolating the
>>> case).
>>> Right now I am running 2.4.1 on the server, since it is more mature,
>>> and 184.108.40.2060 on the client.
>>> Can any of the devs shed some light on the internals of the protobuf-net
>>> dll, and whether there are any internal states that would have a problem
>>> with sharing the same dll across a few clients running on the same
>>> machine?
>>> Any help or suggestions are greatly appreciated.
>>> If you need any extra info (and what kind)... just ask.
You received this message because you are subscribed to the Google Groups
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email
To post to this group, send email to firstname.lastname@example.org.
Visit this group at http://groups.google.com/group/protobuf?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.