Re: Implementing highly efficient binary protocol clients and servers using Google Protocol Buffers and Netty
Netty 3.1.0.ALPHA3 has been released today - you don't need to build from source anymore. Please visit the documentation page below and click the 'LocalTime' example:

  * http://www.jboss.org/auth/netty/documentation.html

Thanks,
Trustin

On Jan 7, 9:34 pm, Trustin Lee trus...@gmail.com wrote:
> Hi folks,
>
> As I posted in a topic here before, I have done some integration work on
> protobuf and Netty to enable the rapid implementation of highly efficient
> binary protocol clients and servers. With both technologies combined, you
> can build a socket client / server with protobuf very quickly.
>
> I'd like to hear some feedback from protobuf users, so I'm posting a link
> to the detailed information on this:
>
>   * http://n2.nabble.com/Google-Protocol-Buffers-integration-is-ready.-td...
>
> Please feel free to reply to this message, to send me an e-mail directly,
> or to reply to the original post, at your option.
>
> Thanks for the great work on protobuf, your previous feedback, and your
> upcoming feedback in advance! :)
>
> Trustin

You received this message because you are subscribed to the Google Groups Protocol Buffers group.
To post to this group, send email to protobuf@googlegroups.com
To unsubscribe from this group, send email to protobuf+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/protobuf?hl=en
Re: Problem with MergeFromCodedStream()
On Mon, Jan 12, 2009 at 8:32 PM, Dave Bailey d...@daveb.net wrote:
> I agree, and it seems to be a question that comes up frequently in this
> forum, so maybe we should add a page to the Wiki that discusses how to
> send and receive a stream of protobuf (or any) messages.

I did add some documentation on that here:

  http://code.google.com/apis/protocolbuffers/docs/techniques.html#streaming

> Things like run-length encoding, magic bytes, checksums, record types -
> these are either highly desirable or absolutely necessary when streaming
> blocks of opaque binary data over a network connection, reading a
> sequence of them from a file, or whatever. I think there may be a
> misconception out there that libprotobuf somehow magically takes care of
> all of those things. It seems to me that people need to think of a
> serialized protobuf message as the payload of a packet, and that it is
> their job to design a packet header that describes the payload with
> enough information that it can be extracted and dispatched to the
> appropriate handler.

Well said.
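The header-plus-payload framing discussed above can be sketched with a simple length prefix. Nothing here is from the thread itself: the function names and the fixed 4-byte big-endian header are illustrative choices (the techniques page also describes varint-delimited framing), and the idea applies identically to any language binding. The payload is treated as opaque bytes, exactly as a serialized protobuf message would be.

```python
import io
import struct

def write_frame(out, payload):
    """Write one frame: a 4-byte big-endian length header,
    then the opaque payload bytes (e.g. a serialized message)."""
    out.write(struct.pack(">I", len(payload)))
    out.write(payload)

def read_frame(inp):
    """Read one frame; return the payload, or None on a clean EOF."""
    header = inp.read(4)
    if not header:
        return None  # clean end of stream: no more frames
    if len(header) < 4:
        raise EOFError("truncated length header")
    (length,) = struct.unpack(">I", header)
    payload = inp.read(length)
    if len(payload) < length:
        raise EOFError("truncated payload")
    return payload

# Round-trip two "messages" through an in-memory stream.
buf = io.BytesIO()
write_frame(buf, b"first message")
write_frame(buf, b"second")
buf.seek(0)
print(read_frame(buf))  # b'first message'
print(read_frame(buf))  # b'second'
print(read_frame(buf))  # None
```

A real header would typically carry the extra fields mentioned above (magic bytes, a record type for dispatch, a checksum); the length prefix is just the minimum needed to extract each message boundary from the byte stream.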