Re: [protobuf] Message was missing required fields
Nearly 5 years later, I found myself having the same problem. Perfectly valid serialized protobufs generated in C++ for message types with no required fields suddenly started throwing InvalidProtocolBufferException: "Message was missing required fields." in Java. The problem is, as far as I can tell, a bug in the Dalvik JIT. I've never seen it on ARM devices, but have seen it on x86 on Android 4.4.4. Disabling the JIT (setting android:vmSafeMode to true in the AndroidManifest.xml) resolves the problem. It all comes down to a sign-extension bug in the following generated code:

  private byte memoizedIsInitialized = -1;
  public final boolean isInitialized() {
    byte isInitialized = memoizedIsInitialized;
    if (isInitialized != -1) return isInitialized == 1;
    memoizedIsInitialized = 1;
    return true;
  }

The value of isInitialized is implicitly converted to an int (for comparison to -1 and 1) and, in certain cases, isn't sign-extended, giving a value of 255. This in turn causes the function to return false. The bug appears to be completely random and doesn't require racing threads. I don't know whether the bug occurs during the initial store of the field or in the subsequent load. A simple workaround is to modify the compiler output to make both memoizedIsInitialized and isInitialized ints, which avoids any need for casting or sign extension.

On Monday, April 19, 2010 at 10:51:14 AM UTC-7, Kenton Varda wrote:
If Henner's answer didn't help, you'll need to provide a small, self-contained example which reproduces the problem.

On Fri, Apr 16, 2010 at 11:05 AM, SyRenity stas@gmail.com wrote:
Hi. I'm occasionally getting the following error in my Java app:

  com.google.protobuf.InvalidProtocolBufferException: Message was missing required fields. (Lite runtime could not determine which fields were missing).
  at com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81)
  at classes.cameraInfoProto$camera$Builder.buildParsed(cameraInfoProto.java:242)
  at classes.cameraInfoProto$camera$Builder.access$11(cameraInfoProto.java:238)
  at classes.cameraInfoProto$camera.parseFrom(cameraInfoProto.java:133)
  at app.jSockets.FetcherSockets$ResponseThread.readMessage(FetcherSockets.java:386)
  at app.jSockets.FetcherSockets$ResponseThread.run(FetcherSockets.java:268)

I double-checked my code, but I have only a single required field, and I'm always filling it in from the C++ app. Any idea how to diagnose it? Thanks.

-- You received this message because you are subscribed to the Google Groups Protocol Buffers group. To post to this group, send email to prot...@googlegroups.com. To unsubscribe from this group, send email to protobuf+u...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/protobuf?hl=en.
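The widening behavior behind the reported Dalvik bug can be illustrated with plain arithmetic. This is a minimal sketch, not the JIT itself: `sign_extend` models the correct byte-to-int conversion the JVM spec requires, `zero_extend` models the buggy conversion, and `is_initialized` mirrors the generated Java method. The function names are illustrative, not from any real API.

```python
RAW = 0xFF  # the stored byte pattern for -1 in memoizedIsInitialized

def sign_extend(b):
    """Correct byte -> int widening: the high bit is treated as a sign bit."""
    return b - 256 if b >= 0x80 else b

def zero_extend(b):
    """The buggy widening: the high bit is ignored, so 0xFF becomes 255."""
    return b

def is_initialized(widen):
    """Mirrors the generated isInitialized() with a pluggable widening rule."""
    memoized = widen(RAW)          # byte widened to int for comparison
    if memoized != -1:
        return memoized == 1       # buggy path lands here with 255 -> False
    return True                    # correct path: compute, memoize, return true

assert sign_extend(RAW) == -1
assert zero_extend(RAW) == 255
assert is_initialized(sign_extend) is True    # expected behavior
assert is_initialized(zero_extend) is False   # the observed random failure
```

With `zero_extend`, the memoized sentinel 255 passes the `!= -1` check and the method returns `255 == 1`, i.e. false, which is exactly why a fully valid message is reported as missing required fields.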
[protobuf] protobuf-java 2.4.X and 2.5.0 are incompatible
Hi All, I work on Apache Spark which is an open source project. We have recently been dealing with a lot of pain due to the fact that the Java Protobuf libraries for 2.4.X and 2.5.0 are not binary compatible. This makes it really difficult for users to include two dependencies A and B that depend on different versions of protobuf-java. Are these incompatibilities an omission, or is this an intentional policy that protobuf is okay making API-breaking changes in minor versions? This violates typical semantic-versioning conventions and makes it pretty tough for downstream users. I don't see any references to library compatibility in the Java protobuf page or the FAQ - apologies if this is covered somewhere... https://developers.google.com/protocol-buffers/docs/javatutorial https://developers.google.com/protocol-buffers/docs/faq - Patrick
Re: [protobuf] protobuf-java 2.4.X and 2.5.0 are incompatible
Established based on what conventions? I'm going based on the semantic versioning guidelines here: http://semver.org/

Basically what I'd like to understand is whether Google cares about this or not, because changing public APIs is a big problem for downstream projects. It means that if you want to write a library that uses protobufs, you can't interoperate with other libraries that also use protobufs. - Patrick

On Wed, Apr 16, 2014 at 3:24 PM, Ilia Mirkin imir...@alum.mit.edu wrote:
While I don't speak for Google, I believe it's fairly well established that 2.4 and 2.5 are considered to be major releases. Switching between them requires regenerating the Java files with protoc, as the internal APIs used by the generated code tend to change. I believe that in general the public APIs remain the same; however, that doesn't let you have multiple protobuf versions without something like jarjar. The minor releases (2.4.0 vs 2.4.1, etc.) should be binary-compatible AFAIK. -ilia

On Wed, Apr 16, 2014 at 4:24 PM, Patrick Wendell pwend...@gmail.com wrote:
Hi All, I work on Apache Spark which is an open source project. We have recently been dealing with a lot of pain due to the fact that the Java Protobuf libraries for 2.4.X and 2.5.0 are not binary compatible. This makes it really difficult for users to include two dependencies A and B that depend on different versions of protobuf-java. Are these incompatibilities an omission, or is this an intentional policy that protobuf is okay making API-breaking changes in minor versions? This violates typical semantic-versioning conventions and makes it pretty tough for downstream users. I don't see any references to library compatibility in the Java protobuf page or the FAQ - apologies if this is covered somewhere...
https://developers.google.com/protocol-buffers/docs/javatutorial https://developers.google.com/protocol-buffers/docs/faq - Patrick
[protobuf] Will the next release officially support golang?
Hey guys, Will the next release officially support golang? And if so, is the implementation / API based on goprotobuf? (We're looking for an alternative to replace goprotobuf; we are currently considering https://code.google.com/p/gogoprotobuf/.) Thanks, Patrick
[protobuf] python proto optimizations
Hey guys, I wrote two different patches which optimize Python proto performance. Both patches are running in production at Dropbox. What is the best way to upstream these changes? Patrick

Patch #1. Python message patch (https://www.dropbox.com/s/q0y44ypti0by779/protobuf-2.5.0.patch1):

Changes:
- precompute various varint tables
- don't use proto's ByteSize function for serialization
- simplified some code (got rid of the listener)
- got rid of StringIO

Internal benchmark:
- random repeated int32s - ~18% faster
- random repeated int64s - ~20% faster
- random repeated strings - 27% faster
- random repeated bytes - 27% faster
- repeated message, each with a single random string - ~20% faster

NOTE:
- predefined_varints.py is generated by generate_predefined_varints.py

Patch #2. C++ experimental binding patch (https://www.dropbox.com/s/5nr0v76nfraaxif/protobuf-2.5.0.patch2):

Changes:
- fixed memory ownership / dangling pointers (see NOTE #1 for known issues):
  1. increment the parent message's refcount when accessing a field,
  2. a cleared field is freed only when the parent is deleted
- fixed MakeDescriptor to correctly generate simple protos (see NOTE #2)
- fixed MergeFrom to not crash on check failure due to self-merge
- fixed clearing of both repeated and non-repeated fields
- modified varint deserialization to always return PyLong (to match the existing Python implementation)
- always mark a message as mutated when extending a repeated field (even when extending by an empty list)
- deleted/updated bad tests from the protobuf test suite

Internal benchmark (relative to the first patch):
- 30x faster for repeated varints
- 8x faster for repeated strings
- 6x faster for repeated bytes
- 26x speedup for repeated nested messages

NOTE:
1. In the current implementation, a new Python object is created each time a field is accessed. To make this 100% correct, we should return the same Python object whenever the same field is accessed; however, I don't think the accounting overhead is worth it.
Implications of the current implementation:
- Repeatedly clearing / mutating the same message can cause memory blowup.
- There's a subtle bug with clearing / mutating default message fields.

This is correct: holding a reference to a MUTATED field X, then clearing the parent, then mutating X, e.g.:

  child = parent.optional_nested_msg
  child.field = 123   # this mutates the field
  parent.Clear()
  child.field = 321
  assert not parent.HasField('optional_nested_msg')  # passes

This is incorrect: holding a reference to an UNMUTATED field X, then clearing the parent, then mutating X:

  child = parent.optional_nested_msg
  parent.Clear()
  child.field = 321   # this inadvertently causes parent to generate a different empty msg for optional_nested_msg
  assert not parent.HasField('optional_nested_msg')  # fails

Luckily, these access patterns are extremely rare (at least at Dropbox).

2. I wrote a fully functional MakeDescriptor for C++ protos when I was at Google. Talk to the F1 team (specifically Bart Samwel / Chad Whipkey) if you're interested in upstreaming that to the open-source community.
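Patch #1 above mentions precomputing varint tables. For context, the wire format being optimized is the standard protobuf base-128 varint; here is a plain-Python sketch of that encoding (the table precomputation itself is an optimization detail not shown, and these function names are illustrative, not from the patch):

```python
def encode_varint(value):
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        bits = value & 0x7F          # low 7 bits of the remaining value
        value >>= 7
        if value:
            out.append(bits | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(bits)
            return bytes(out)

def decode_varint(data, pos=0):
    """Decode one varint from data starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = data[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):           # continuation bit clear: last byte
            return result, pos
        shift += 7

# The classic example from the protobuf encoding docs: 300 -> AC 02.
assert encode_varint(300) == b"\xac\x02"
assert decode_varint(b"\xac\x02") == (300, 2)
```

Because every serialized field tag and every length prefix is a varint, this inner loop runs constantly during serialization, which is why precomputing tables for it pays off in pure Python.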
[protobuf] FindFileByName fails
Hi all, I have an application that was originally doing all of the Protocol Buffer access in Python, but I'm in the process of moving a performance-critical part to C++. Most of the Protocol Buffer access code remains in Python, with just one part in C++ right now. Most of the time everything works fine, but periodically the application terminates with this message:

  libprotobuf FATAL src/proto/user.pb.cc:51] CHECK failed: file != NULL:
  terminate called after throwing an instance of 'google::protobuf::FatalException'
    what(): CHECK failed: file != NULL:

and the relevant part of the source file is:

  void protobuf_AssignDesc_user_2eproto() {
    protobuf_AddDesc_user_2eproto();
    const ::google::protobuf::FileDescriptor* file =
        ::google::protobuf::DescriptorPool::generated_pool()->FindFileByName(
            "user.proto");

I'm a little confused about why the C++ code needs to access the original descriptor at runtime, and why it is intermittent as to when it needs to do so. The proto file in question does not import any other packages, and of course the C++ source file already contains the file descriptor that describes the whole proto file anyway. Does this simple use case require me to delve into descriptor databases and pools? I'm using version 2.4.1 with the C++ backend for Python via the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp environment variable. Thanks, Patrick
[protobuf] Re: CodedInputStream and Windows
Thanks Jason. I'm a Linux guy trying to work on Windows, so this might just be my lack of knowledge. I get this error when flushing the stream (via the destructor that is called when exiting scope):

  msvcr100.dll!_crt_debugger_hook(int _Reserved) Line 65 C
  msvcr100.dll!_call_reportfault(int nDbgHookCode, unsigned long dwExceptionCode, unsigned long dwExceptionFlags) Line 167 + 0x6 bytes C++
  msvcr100.dll!_invoke_watson(const wchar_t * pszExpression, const wchar_t * pszFunction, const wchar_t * pszFile, unsigned int nLine, unsigned int pReserved) Line 155 + 0xf bytes C++
  msvcr100.dll!_invalid_parameter(const wchar_t * pszExpression, const wchar_t * pszFunction, const wchar_t * pszFile, unsigned int nLine, unsigned int pReserved) Line 110 + 0x14 bytes C++
  msvcr100.dll!_invalid_parameter_noinfo() Line 121 + 0xc bytes C++
  msvcr100.dll!_write(int fh, const void * buf, unsigned int cnt) Line 67 + 0x24 bytes C
  WinGateway.exe!google::protobuf::io::FileOutputStream::CopyingFileOutputStream::Write() Line 244 + 0xf bytes C++
  WinGateway.exe!google::protobuf::io::CopyingOutputStreamAdaptor::WriteBuffer() Line 367 + 0x10 bytes C++
  WinGateway.exe!google::protobuf::io::FileOutputStream::~FileOutputStream() Line 180 C++
  WinGateway.exe!igrpc::Connection::writeMessage(int fieldNumber, google::protobuf::MessageLite message) Line 531 C++

You can see that the write function inside FileOutputStream::CopyingFileOutputStream::Write fails. I've read the following, which originally made me question whether the protobuf API would work correctly on Windows: "...file operations such as read(), write(), and close() cannot be assumed to work correctly when applied to socket descriptors. Sockets must be closed by using the closesocket()..." (http://www.sockets.com/winsock.htm#CloseSocket, http://www.sockets.com/winsock.htm). But then you mentioned that the unit tests indicate it should work. I looked at the unit tests, and they seem to be using file descriptors for files and not sockets.
Could it be that it fails with sockets and works fine with files? Thanks again. Patrick

On Apr 20, 6:17 pm, Jason Hsueh jas...@google.com wrote:
I suppose this should read "POSIX file descriptors", which Windows supports. I have never used protobuf on Windows personally, but the project is tested on Windows during the release process, and zero_copy_stream_unittest.cc has test cases for using file descriptors.

On Wed, Apr 20, 2011 at 5:56 PM, Patrick schultz.patr...@gmail.com wrote:
I'm in the process of porting my custom protobuf RPC library to Windows and have run into a snag. I'm using the CodedInputStream class to read messages. The documentation for zero_copy_input_stream.h implies that this is only compatible with Linux: "These implementations include Unix file descriptors and C++ iostreams." Am I missing something? It was my assumption that the entire protobuf library was platform independent (at least the external APIs). Maybe someone could point me to an example of instantiating a CodedInputStream under a Windows/VS environment? Regards, Patrick
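The winsock quote above is the likely culprit: on Windows the CRT's _read()/_write() do not accept socket handles, so handing a socket descriptor to a file-based stream fails even though the same code works on Linux. The portable route is to use the socket API (send/recv) rather than file I/O on the descriptor. A small Python sketch of that distinction, using a connected socket pair as a stand-in for a real client connection (all names here are illustrative):

```python
import socket

# socketpair() gives two connected sockets, standing in for server/client ends.
server_end, client_end = socket.socketpair()

# Socket-API I/O (the analogue of send()/recv() in C) works on both platforms.
# In protobuf C++ terms, this is what a custom CopyingInputStream/
# CopyingOutputStream implemented over send()/recv() would do internally,
# instead of FileInputStream/FileOutputStream over the raw descriptor.
payload = b"\x0a\x03foo"            # an arbitrary byte payload
client_end.sendall(payload)
received = server_end.recv(1024)    # reads the bytes the peer sent

server_end.close()
client_end.close()

assert received == payload
```

The zero_copy_stream unit tests pass on Windows because they exercise file descriptors for regular files; they do not cover the socket case, which is consistent with the failure you are seeing.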
[protobuf] Re: Typed Array support in Chrome 9
Hi jd, I maintain a library called protobuf.js, available here: https://github.com/sirikata/protojs

It does not implement the full specification--only message serialization and parsing, so RPC (xhr?) and extensions must be done in the application. Currently the only Stream implementations are base64 string (for browsers without typed array support) and Array of number. From there, an Array is really simple to convert to a typed array. Also, parsing an existing Uint8Array should work already thanks to duck-typing. I don't currently plan to change this because typed arrays don't allow appending, so with the current implementation it is probably more efficient to do the conversion after encoding. If you have any questions or bug reports about the library, feel free to send me an email directly. -Patrick

On Dec 30, 9:54 am, jd unicom...@gmail.com wrote:
Are there any plans for code-generating JavaScript (using typed arrays) so we can use protocol buffers directly in the browser (Chrome, Mozilla, Safari)?
[protobuf] Protocol Buffer Size Limit Python
I am currently having some trouble serializing a large Python message (45 MB) that I can serialize in C++ and C#. Is there a limitation on the size of a message in Python? This is the exception I am receiving:

  Traceback (most recent call last):
    File "C:\Python26\lib\threading.py", line 532, in __bootstrap_inner
      self.run()
    File "C:\Python26\lib\threading.py", line 484, in run
      self.__target(*self.__args, **self.__kwargs)
    File "D:\Code\c\lbmpymodule\Release\test_request.py", line 89, in handle_fis
      fis_response.ParseFromString(args[0])
    File "build\bdist.win32\egg\google\protobuf\message.py", line 160, in ParseFromString
      self.MergeFromString(serialized)
    File "build\bdist.win32\egg\google\protobuf\reflection.py", line 1215, in MergeFromString
      bytes_read = _DeserializeOneEntity(message_descriptor, self, decoder)
    File "build\bdist.win32\egg\google\protobuf\reflection.py", line 1059, in _DeserializeOneEntity
      raise RuntimeError('TODO(robinson): Wiretype mismatches not handled.')
  RuntimeError: TODO(robinson): Wiretype mismatches not handled.

Thanks, Patrick
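A wiretype mismatch usually means the parser is reading bytes that are not a valid message (truncated, corrupted, or mis-framed), rather than hitting a size limit. One common cause when messages travel over a stream is missing length-prefix framing, so the receiver parses a partial or concatenated buffer. This is a speculative diagnosis for the report above, but the framing technique itself is standard; a minimal sketch (the function names are illustrative):

```python
import struct

def frame(payload):
    """Prefix a serialized message with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def unframe(buf):
    """Split one framed message off the front of buf; return (msg, rest)."""
    (n,) = struct.unpack_from(">I", buf)
    return buf[4:4 + n], buf[4 + n:]

# Stand-in for message.SerializeToString(); two messages sent back to back.
msg = b"\x0a\x05hello"
wire = frame(msg) + frame(msg)

first, rest = unframe(wire)
second, rest = unframe(rest)
assert first == msg and second == msg and rest == b""
```

With framing in place, the receiver always hands ParseFromString exactly one complete message, which rules out the "parsing garbage" failure mode regardless of message size.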
[protobuf] Timeouts for reading from a CodedInputStream
Background: I've developed a C++ RPC server using protobufs as the IDL. It works great. Thanks for protobufs!

Problem: When a client connects to the RPC server, a separate thread in the server handles the connection and blocks, waiting for data from the client. My message parsing function for the connection object starts like this:

  FileInputStream raw_input(fd);   // fd is the socket file descriptor
  CodedInputStream input(&raw_input);
  uint32 tag = input.ReadTag();
  ...

This is all fine and dandy except when I want to shut down the server or connection (not client initiated). ReadTag (as well as the other Read functions) blocks until data is received, but I want it to time out after a specified amount of time -- in essence a polling read instead of a blocking one. This would allow me to check that the connection is still valid and either re-enter my message parsing function or clean up and exit. Any ideas on how I can accomplish this?
[protobuf] Re: Timeouts for reading from a CodedInputStream
Thanks Evan. I don't really want to buy the cow to get a glass of milk. I also have the problem that the RPC library I wrote comes in a threaded model and a multi-process model. The multi-process one makes some things a bit harder. I was hoping to use a shm mutex to signal termination, but this would only work if my message parsing loop timed out every so often and could therefore check the mutex. I'm sure Kenton has thought about this problem before. I'm curious about his thoughts and whether there are plans to support polling reads.

On Sep 28, 12:41 pm, Evan Jones ev...@mit.edu wrote:
On Sep 28, 2010, at 15:33, Patrick wrote:
This is all fine and dandy except when I want to shut down the server or connection (not client initiated). ReadTag (as well as the other Read functions) blocks until data is received, but I want it to time out after a specified amount of time -- in essence a polling read instead of a blocking one. This would allow me to check that the connection is still valid and either re-enter my message parsing function or clean up and exit.

One quick hack that might work: if you have threads anyway, closing the file descriptor in the other thread makes the read fail, which causes input.ReadTag() to return 0. The more complex hack is to supply your own ZeroCopyInputStream implementation, and in your implementation of ::Next, implement your own timeout logic. In my implementation, I manage this by manually managing my own buffer, so I never call the CodedInputStream routines unless I know there is sufficient data. This may not be ideal for your application, so your mileage may vary. Good luck, Evan Jones -- Evan Jones http://evanjones.ca/
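Evan's first suggestion -- unblocking a blocked read by closing the descriptor from another thread -- can be sketched with a pipe. This Python sketch uses the safer pipe analogue (closing the peer's write end, which deterministically delivers end-of-stream) rather than closing the descriptor being read; the names are illustrative:

```python
import os
import threading
import time

r, w = os.pipe()                # r: the "connection" the parser blocks on

def shutdown_later():
    """Simulates a shutdown request arriving while the reader is blocked."""
    time.sleep(0.1)
    os.close(w)                 # closing the peer end unblocks the reader

threading.Thread(target=shutdown_later).start()

# Blocks like ReadTag() until the other thread closes the pipe, then
# returns b"" (end-of-stream) -- analogous to ReadTag() returning 0.
data = os.read(r, 4096)
os.close(r)

assert data == b""
```

The parsing loop can then treat a zero-length read / zero tag as "connection is going away" and clean up, without ever needing a timeout.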
[protobuf] Re: Timeouts for reading from a CodedInputStream
On Sep 28, 6:07 pm, Kenton Varda ken...@google.com wrote:
On Tue, Sep 28, 2010 at 5:38 PM, Evan Jones ev...@mit.edu wrote:
On Sep 28, 2010, at 18:36, Patrick wrote:
I also have the problem that the RPC I wrote comes in a threaded model and a multi-process model. The multi-process one makes some things a bit harder. I was hoping to utilize a shm mutex to signal termination but this would only work if my message parsing loop timed out every so often and, therefore, could check the mutex.

This should be pretty easy to achieve by supplying your own implementation of FileInputStream that uses select() and a non-blocking read() rather than just read(). It can then fail the call to Next() whenever it is convenient.

Oh duh, why didn't I think of that? Sounds simple enough. I'll give it a go. Thank you both for your help. - Patrick
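The select()-based polling read Kenton describes can be sketched in Python; a custom stream's Next() would do essentially this on each call, returning failure on timeout so the caller can check its shutdown flag. A minimal sketch with a connected socket pair (the helper name is illustrative):

```python
import select
import socket

def read_with_timeout(sock, timeout):
    """Polling read: return received bytes, or None if nothing arrives in time."""
    ready, _, _ = select.select([sock], [], [], timeout)
    if not ready:
        return None                 # timed out -- caller can check shutdown state
    return sock.recv(4096)

a, b = socket.socketpair()          # stands in for a real client connection

empty = read_with_timeout(b, 0.05)  # nothing sent yet: select() times out
a.sendall(b"tag")
got = read_with_timeout(b, 0.05)    # data already queued: returned immediately

a.close()
b.close()

assert empty is None
assert got == b"tag"
```

In the C++ version, Next() would select() on the socket and return false after the timeout (or loop, after consulting the shared-memory mutex), which cleanly unwinds ReadTag() in both the threaded and multi-process models.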
[protobuf] Importing Protos In Python
I apologize if this is a repost, but I could not find any information searching the forums. I am trying to import two simple protos into Python 2.6.

D:\proto\src\MyProtos\foo.proto:

  package foobar;
  option java_package = "foobar";
  option java_outer_classname = "Foo";

  message Foo {
    optional string the_foo = 1;
  }

D:\proto\src\MyProtos\bar.proto:

  package foobar;
  option java_package = "foobar";
  option java_outer_classname = "Bar";

  import "MyProtos/foo.proto";

  message Bar {
    optional Foo my_foo = 1;
    optional string bar_string = 2;
  }

I compile the protos on Windows with the following command:

  D:\proto> protoc.exe --proto_path=src/ --python_out=build/gen src/MyProtos/foo.proto src/MyProtos/bar.proto

I then run the following Python script:

  import sys
  sys.path.append("D:\\proto\\build\\gen\\")
  import foo_pb2
  import bar_pb2

and receive the following error:

  Traceback (most recent call last):
    File "D:\Code\python\lbmpymodule\lbmpymodule\Release\test.py", line 15, in <module>
      import bar_pb2
    File "D:\Code\python\lbmpymodule\lbmpymodule\Release\bar_pb2.py", line 52, in <module>
      import MyProtos.foo_pb2
  ImportError: No module named MyProtos.foo_pb2

What do I need to do to properly import these two protos into Python? Thanks, Patrick
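The traceback shows why this fails: the generated bar_pb2.py does `import MyProtos.foo_pb2`, mirroring the import path in the .proto, so Python must be able to import `MyProtos` as a package from the generated-code root. That generally means putting build/gen (not build/gen/MyProtos) on sys.path, making MyProtos a package, and importing the modules by their package-qualified names. A self-contained sketch of that layout using stand-in module contents (the file bodies are placeholders, not real protoc output):

```python
import os
import sys
import tempfile

# Recreate the layout protoc produces under build/gen: a MyProtos package
# containing the generated modules.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "MyProtos")
os.makedirs(pkg)

# __init__.py makes MyProtos an importable package (required on Python 2.6).
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "foo_pb2.py"), "w") as f:
    f.write("VALUE = 'foo'\n")
with open(os.path.join(pkg, "bar_pb2.py"), "w") as f:
    # Same package-qualified import the real generated bar_pb2.py performs.
    f.write("import MyProtos.foo_pb2\n"
            "VALUE = MyProtos.foo_pb2.VALUE + 'bar'\n")

sys.path.append(root)        # point at the gen root, not at gen/MyProtos/
import MyProtos.bar_pb2      # and import by the package-qualified name

assert MyProtos.bar_pb2.VALUE == "foobar"
```

Applied to the original script: add an empty `__init__.py` inside build/gen/MyProtos, keep `sys.path.append("D:\\proto\\build\\gen\\")`, and change the imports to `import MyProtos.foo_pb2` and `import MyProtos.bar_pb2`.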
[protobuf] CodedInputStream hanging in constructor
When I construct a new CodedInputStream:

  this->fd = sock.impl()->sockfd();
  ZeroCopyInputStream *raw_input = new FileInputStream(this->fd);
  this->input = new CodedInputStream(raw_input);

it hangs in the constructor. I ran a backtrace and can see that it is hanging in Refresh(). From my client side:

  int fd = sock.impl()->sockfd();
  ZeroCopyOutputStream *raw_output = new FileOutputStream(fd);
  CodedOutputStream *output = new CodedOutputStream(raw_output);
  uint32 tag = WireFormatLite::MakeTag(Stream::kRFieldNumber,
                                       WireFormatLite::WIRETYPE_LENGTH_DELIMITED);
  printf("Writing tag [%x]\n", tag);
  int i = 0;
  while (i < 2050) {
    output->WriteTag(tag);
    i++;
  }

The backtrace at the hang:

  #0 0xe524 in __kernel_vsyscall ()
  #1 0xb7de999b in read () from /lib/tls/i686/cmov/libpthread.so.0
  #2 0xb7eb928a in google::protobuf::io::FileInputStream::CopyingFileInputStream::Read (this=0x807b71c, buffer=0x807cc48, size=8192) at google/protobuf/io/zero_copy_stream_impl.cc:141
  #3 0xb7e4dc70 in google::protobuf::io::CopyingInputStreamAdaptor::Next (this=0x807b730, data=0xbfb08204, size=0xbfb08200) at google/protobuf/io/zero_copy_stream_impl_lite.cc:238
  #4 0xb7eb836e in google::protobuf::io::FileInputStream::Next (this=0x807b718, data=0xbfb08204, size=0xbfb08200) at google/protobuf/io/zero_copy_stream_impl.cc:89
  #5 0xb7e4ba69 in google::protobuf::io::CodedInputStream::Refresh (this=0x807b758) at google/protobuf/io/coded_stream.cc:492
  #6 0x0805dacf in CodedInputStream (this=0x807b758, input=0x807b718) at /usr/local/include/google/protobuf/io/coded_stream.h:1056
  #7 0x0805c961 in Connection (this=0x807c500, so...@0xbfb082c0) at src/Connection.cc:56

I noticed that the Read() in the backtrace has size set to 8192. So I got curious and sent enough tags to hit this size limit. As soon as it receives this much data, the code proceeds.
According to a previous post: http://groups.google.com/group/protobuf/browse_thread/thread/6e9da43146339ee2/241f0aa64c4c80ca?lnk=gstq=codedinputstream#241f0aa64c4c80ca you indicated that it should only block if there is no data available. This is obviously not the case here. Any idea where I'm going wrong? It looks pretty simple to me.
[protobuf] Re: CodedInputStream hanging in constructor
I have investigated further and saw that the buffer wasn't being flushed; I should have realized this earlier. Any reason why the Java CodedInputStream has a flush method but the C++ API has no equivalent?

On Feb 25, 9:04 pm, Kenton Varda ken...@google.com wrote:
Weird, read() on a socket should return as soon as *any* data is available, not wait until the entire buffer can be filled. Have you set some unusual flags on your socket which may be causing it to behave this way?

On Thu, Feb 25, 2010 at 5:20 PM, Patrick schultz.patr...@gmail.com wrote:
When I construct a new CodedInputStream:

  this->fd = sock.impl()->sockfd();
  ZeroCopyInputStream *raw_input = new FileInputStream(this->fd);
  this->input = new CodedInputStream(raw_input);

it hangs in the constructor. I ran a backtrace and can see that it is hanging in Refresh(). I noticed that the Read() in the backtrace has size set to 8192. So I got curious and sent enough tags to hit this size limit. As soon as it receives this much data, the code proceeds. ...
[protobuf] Re: CodedInputStream hanging in constructor
I meant CodedOutputStream, of course.

On Feb 25, 9:18 pm, Patrick schultz.patr...@gmail.com wrote:

I have investigated further and saw that the buffer wasn't being flushed; I should have realized this earlier. Any reason why the Java CodedInputStream has a flush method but the C++ API has no equivalent?
[protobuf] Javascript protocol buffer implementation
Hi all,

I am announcing a BSD-licensed JavaScript protocol buffer implementation called protojs. The library supports all wire types, as well as packed fields (though no autodetection yet), float/double support thanks to jsfromhell.com, and Unicode support. You can find the github site here: http://github.com/sirikata/protojs

To install, you need to run ./bootstrap.sh to download and compile antlr, and then run make to build the pbj compiler and compile the sample javascript code.

Protojs also supports higher-level types, such as enums, nested composites, and precise 64-bit integers (using two 32-bit numbers). It also allows extending the built-in types, and has a library called pbj which includes other helpful types, such as vectors, quaternions, and uuids.

The library is modelled after Python's protocol buffers (same function names). It uses getters and setters on browsers that support them (all but IE), and works without getters and setters at the expense of no runtime type checking.

Sadly, JavaScript currently has no binary datatype, so in its place the library currently supports serializing to and deserializing from base64 strings and arrays of integers.

Feel free to let me know if you have any suggestions or bugs, or you can fork the project since it's developed in git.

-Patrick