google::protobuf::io::ZeroCopyInputStream*
Hi all,

I'm trying to read a file using TextFormat, and I'm doing it like this:

(line 45) int fd = open(sName.c_str(), O_RDONLY);
(line 46) ZeroCopyInputStream *input = new FileInputStream(fd);

where sName is a string holding the name of the file I want to read. The following headers are included:

#include <iostream>
#include <fstream>
#include <fcntl.h>
#include <google/protobuf/text_format.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <string>
#include "Configuration.h"
#include "reader.pb.h"

along with these using declarations:

using namespace ProtoReader;
using namespace google::protobuf;
using namespace google::protobuf::io;
using google::protobuf::TextFormat;
using std::cout;
using std::endl;
using std::fstream;
using std::ios;
using std::string;

The compiler errors I receive are:

../Configuration.cpp:46: error: expected type-specifier before ‘FileInputStream’
../Configuration.cpp:46: error: cannot convert ‘int*’ to ‘google::protobuf::io::ZeroCopyInputStream*’ in initialization
../Configuration.cpp:46: error: expected ‘,’ or ‘;’ before ‘FileInputStream’

Can anybody tell me what I'm doing wrong?

Best regards,

--
As the wise man said, wherever you go, there you are

You received this message because you are subscribed to the Google Groups "Protocol Buffers" group. To post to this group, send email to protobuf@googlegroups.com. To unsubscribe from this group, send email to protobuf+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/protobuf?hl=en
[ANN] Haskell version 1.5.0 released
Hello all,

I have just uploaded version 1.5.0 of the Haskell version to hackage. The links for the three pieces are:

http://hackage.haskell.org/package/protocol-buffers
http://hackage.haskell.org/package/protocol-buffers-descriptor
http://hackage.haskell.org/package/hprotoc

This catches up to Google's version 2.1.0, as described below:

* Support for repeated fields of primitive types (good arrays!). Note that using it on an invalid field type will generate an error.
* NO support yet for the *_FIELD_NUMBER style constants.
* It is now an error to define a default value for a repeated field.
* Fields can now be marked deprecated (does nothing).
* The type name resolver will no longer resolve type names to fields. Note that this applies to types of both normal and extension fields.

A lexer bug was found and fixed by George van den Driessche, which occurred when a numeric literal in a proto file was followed immediately by a newline character.

Cheers,
Chris
Re: google::protobuf::io::ZeroCopyInputStream*
The problem is now solved. The line

#include <google/protobuf/io/zero_copy_stream_impl.h>

was missing. FileInputStream is declared in zero_copy_stream_impl.h; zero_copy_stream.h only declares the abstract ZeroCopyInputStream interface.

Regards,

2009/6/15 Carmen Navarrete <carmen.navarr...@gmail.com>:
> Hi all, I'm trying to read a file using TextFormat, and I'm doing it like this: ...
> [rest of quoted message trimmed]

--
As the wise man said, wherever you go, there you are
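For reference, the analogous TextFormat parser in Python lives in google.protobuf.text_format, and the same parse can be sketched there without the C++ stream setup. This is a minimal sketch using descriptor_pb2.FileDescriptorProto as a stand-in message, since the poster's reader.pb.h types aren't available (assumption: the protobuf Python package is installed):

```python
from google.protobuf import descriptor_pb2, text_format

# Parse a text-format message from a string; text_format.Merge is the
# Python analogue of C++ TextFormat::Merge / TextFormat::Parse.
msg = descriptor_pb2.FileDescriptorProto()
text_format.Merge('name: "reader.proto" package: "ProtoReader"', msg)
```

Reading from a file is then just text_format.Merge(open(path).read(), msg); Python has no direct counterpart to the FileInputStream/ZeroCopyInputStream machinery.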
Re: 'Streaming' messages (say over a socket)
The normal way to do it is to send each Entity as a separate message. CodedInputStream/CodedOutputStream is handy for that kind of thing.

--Chris

On Sun, Jun 14, 2009 at 4:14 PM, Alex Black <a...@alexblack.ca> wrote:
> Is there a way to start sending a message before it's fully composed?
> Say we have messages like this:
>
> message Entity {
>   required int32 id = 1;
>   required string name = 2;
> }
>
> message Entities {
>   repeated Entity entity = 1;
> }
>
> If we're sending an Entities message with 1,000 Entity objects in it, is there a way to avoid composing the entire message in memory, serializing it, and then sending it out? I'd like to avoid allocating RAM for the entire message and just send it out as I compose it...
>
> thx,
> - Alex
Re: 'Streaming' messages (say over a socket)
http://code.google.com/apis/protocolbuffers/docs/techniques.html#streaming

On Sun, Jun 14, 2009 at 4:14 PM, Alex Black <a...@alexblack.ca> wrote:
> Is there a way to start sending a message before it's fully composed?
> [rest of quoted message trimmed]
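The technique described at that link is to write each message length-delimited, one at a time. Neither reply shows code, so here is a minimal sketch of the framing in Python, with hand-rolled base-128 varints so it runs without the protobuf library; in real code you would serialize each Entity with SerializeToString() and could use the library's own varint helpers:

```python
import io

def write_varint(out, value):
    # Base-128 varint: low 7 bits per byte, high bit set while more follow.
    while True:
        low = value & 0x7F
        value >>= 7
        if value:
            out.write(bytes([low | 0x80]))
        else:
            out.write(bytes([low]))
            return

def read_varint(inp):
    result, shift = 0, 0
    while True:
        b = inp.read(1)[0]
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result
        shift += 7

def write_delimited(out, payload):
    # Length prefix, then the serialized message bytes.
    write_varint(out, len(payload))
    out.write(payload)

def read_delimited(inp):
    return inp.read(read_varint(inp))

# Stream three "entities" one at a time -- nothing ever holds the whole batch.
stream = io.BytesIO()
for payload in (b"entity-1", b"entity-2", b"x" * 300):
    write_delimited(stream, payload)
stream.seek(0)
first = read_delimited(stream)
```

The receiver just loops read_delimited() until the stream is exhausted, parsing each chunk as an Entity, so neither side ever materializes the full Entities message.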
Re: Python - Appending repeated messages instead of merging
This is tricky, as the Python API (like the C++ API) has a strong sense of ownership: outer messages own the message objects embedded inside them. In the Python API this is necessary because assigning a field in an optional sub-message may also cause the sub-message itself to become present in the parent, and it marks the parent dirty, meaning that some cached information needs to be recomputed. If you were allowed to append message objects allocated externally, this could get muddled. For example, what happens if you do:

message1 = parse_message(...)
message2 = MyMessage()
message2.submessage.append(message1.submessage[0])

Now message1 and message2 presumably share an embedded message instance. Then what happens when you modify the sub-message? Both message1 and message2 would have to be marked dirty, so we'd have to start keeping track of multiple parents for each message object, which is rather convoluted. So I think the current approach will probably stay.

On Thu, Jun 11, 2009 at 6:35 PM, Dan <danle...@gmail.com> wrote:
> Hello,
> I'm finding that I am writing code that looks a lot like this:
>
> tmpsubmessage = parse_submessage_data()
> submessage = message.submessage.add()
> submessage.MergeFrom(tmpsubmessage)
>
> This seems inefficient to me. One alternative would be to create the new submessage and fill it on the spot:
>
> submessage = message.submessage.add()
> parse_submessage_data(submessage)
>
> However, sometimes there's a bit of distance between when I'm generating the data and when I'm putting it all together (such as storing pre-made message sections and loading them from a database later). What I really want is an append:
>
> submessage = parse_submessage_data()
> message.submessage.append(submessage)
>
> However, I can't see any way of doing this directly. The closest I've read about in the API is to create the message's parent, then merge the parents together (in the implementation this automatically appends repeated messages). But that's impractical if the same message has multiple potential parents.
>
> Any ideas?
> Thanks,
> Daniel
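For what it's worth, the add-then-copy pattern from the question can be written as a single expression, which at least avoids naming the temporary slot. A minimal sketch, using descriptor_pb2.FileDescriptorSet as a stand-in for a message with a repeated message field (assumption: your generated classes behave the same way):

```python
from google.protobuf import descriptor_pb2

# Build a sub-message independently of any parent
# (standing in for parse_submessage_data()).
tmp = descriptor_pb2.FileDescriptorProto()
tmp.name = "example.proto"

# "Append" by copy: add() creates a parent-owned slot, CopyFrom fills it.
parent = descriptor_pb2.FileDescriptorSet()
parent.file.add().CopyFrom(tmp)

# The parent owns its own copy, so mutating tmp afterwards cannot
# silently change parent -- this is the ownership model the reply describes.
tmp.name = "changed.proto"
```

The copy is the price of the single-parent ownership model; sharing the instance between parents is exactly what the API is designed to prevent.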
Java thread safety
In the Java generated code, there are static methods like parseFrom(CodedInputStream ...) that create protocol buffer messages from a file or other buffer. Can I call these directly from multiple threads, or should I use a wrapper with the synchronized keyword?

Thanks,
Wayne