[protobuf] Re: error: cannot access CheckReturnValue
When I remove this block:

```
implementation("com.google.protobuf:protobuf-java:3.23.0") {
    exclude(group: 'com.google.guava', module: 'guava')
    exclude(group: 'org.checkerframework', module: 'checker-compat-qual')
    exclude(group: 'javax.annotation', module: 'jsr250-api')
    exclude(group: 'io.opencensus', module: 'opencensus-api')
    exclude(group: 'com.google.errorprone', module: 'error_prone_annotations')
    exclude(group: 'com.google.protobuf', module: 'protobuf-javalite')
    exclude(group: 'io.grpc', module: 'grpc-context')
}
```

Timestamp no longer gets recognized, but the CheckReturnValue problem is gone. On the other hand, when I add it back I get the CheckReturnValue error. Do you have any idea whether the problem is with the protobuf-java or the guava dependency?

On Friday, June 16, 2023 at 11:55:09 AM UTC-5 Deanna Garcia wrote:
> I think you're right that this is likely a problem with guava. Can you try
> posting a bug in their repo?
>
> On Thursday, June 15, 2023 at 6:55:37 PM UTC-7 Kevin Jimenez wrote:
>
>> I am trying to install gRPC for Android with access to Timestamp. The
>> code generation works fine, but the project throws this error. I did some
>> research and it looks like it has to do with guava. Attached is my app
>> build.gradle:
>>
>> *Error*
>>
>> > Task :app:compileDevDebugJavaWithJavac
>> error: cannot access CheckReturnValue
>> class file for javax.annotation.CheckReturnValue not found
>> cannot access CheckReturnValue
>>
>> Note: Some input files use or override a deprecated API.
>> Note: Recompile with -Xlint:deprecation for details.
>> 1 error
>>
>> Build Gradle Snippet
>> ```
>> // You need to build grpc-java to obtain these libraries below.
>> implementation 'io.grpc:grpc-okhttp:1.55.1' // CURRENT_GRPC_VERSION
>> implementation 'io.grpc:grpc-protobuf-lite:1.55.1' // CURRENT_GRPC_VERSION
>> implementation 'io.grpc:grpc-stub:1.55.1' // CURRENT_GRPC_VERSION
>> implementation 'org.apache.tomcat:annotations-api:6.0.53'
>> implementation 'com.google.protobuf:protobuf-javalite:3.23.2'
>> implementation("com.google.protobuf:protobuf-java:3.23.0") {
>>     exclude(group: 'com.google.guava', module: 'guava')
>>     exclude(group: 'org.checkerframework', module: 'checker-compat-qual')
>>     exclude(group: 'javax.annotation', module: 'jsr250-api')
>>     exclude(group: 'io.opencensus', module: 'opencensus-api')
>>     exclude(group: 'com.google.errorprone', module: 'error_prone_annotations')
>>     exclude(group: 'com.google.protobuf', module: 'protobuf-javalite')
>>     exclude(group: 'io.grpc', module: 'grpc-context')
>> }
>> //implementation 'com.google.errorprone:error_prone_annotations:2.18.0'
>> //implementation("com.google.guava:guava:32.0.1-android")
>> //protobuf 'com.google.protobuf:protobuf-java:3.23.0'
>> }
>>
>> protobuf {
>>     protoc { artifact = 'com.google.protobuf:protoc:3.23.2' }
>>     plugins {
>>         grpc {
>>             artifact = 'io.grpc:protoc-gen-grpc-java:1.55.1' // CURRENT_GRPC_VERSION
>>         }
>>     }
>>     generateProtoTasks {
>>         all().each { task ->
>>             task.builtins {
>>                 java { option 'lite' }
>>             }
>>             task.plugins {
>>                 grpc { // Options added to --grpc_out
>>                     option 'lite'
>>                 }
>>             }
>>         }
>>     }
>> }
>> ```
>>
>> Thanks in advance! Lmk if you want any more info

-- You received this message because you are subscribed to the Google Groups "Protocol Buffers" group. To unsubscribe from this group and stop receiving emails from it, send an email to protobuf+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/protobuf/0a450964-2396-4798-8757-ea4f1e346e6dn%40googlegroups.com.
[protobuf] error: cannot access CheckReturnValue
I am trying to install gRPC for Android with access to Timestamp. The code generation works fine, but the project throws this error. I did some research and it looks like it has to do with guava. Attached is my app build.gradle:

*Error*

```
> Task :app:compileDevDebugJavaWithJavac
error: cannot access CheckReturnValue
class file for javax.annotation.CheckReturnValue not found
cannot access CheckReturnValue

Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 error
```

Build Gradle Snippet

```
// You need to build grpc-java to obtain these libraries below.
implementation 'io.grpc:grpc-okhttp:1.55.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-protobuf-lite:1.55.1' // CURRENT_GRPC_VERSION
implementation 'io.grpc:grpc-stub:1.55.1' // CURRENT_GRPC_VERSION
implementation 'org.apache.tomcat:annotations-api:6.0.53'
implementation 'com.google.protobuf:protobuf-javalite:3.23.2'
implementation("com.google.protobuf:protobuf-java:3.23.0") {
    exclude(group: 'com.google.guava', module: 'guava')
    exclude(group: 'org.checkerframework', module: 'checker-compat-qual')
    exclude(group: 'javax.annotation', module: 'jsr250-api')
    exclude(group: 'io.opencensus', module: 'opencensus-api')
    exclude(group: 'com.google.errorprone', module: 'error_prone_annotations')
    exclude(group: 'com.google.protobuf', module: 'protobuf-javalite')
    exclude(group: 'io.grpc', module: 'grpc-context')
}
//implementation 'com.google.errorprone:error_prone_annotations:2.18.0'
//implementation("com.google.guava:guava:32.0.1-android")
//protobuf 'com.google.protobuf:protobuf-java:3.23.0'
}

protobuf {
    protoc { artifact = 'com.google.protobuf:protoc:3.23.2' }
    plugins {
        grpc {
            artifact = 'io.grpc:protoc-gen-grpc-java:1.55.1' // CURRENT_GRPC_VERSION
        }
    }
    generateProtoTasks {
        all().each { task ->
            task.builtins {
                java { option 'lite' }
            }
            task.plugins {
                grpc { // Options added to --grpc_out
                    option 'lite'
                }
            }
        }
    }
}
```

Thanks in advance!
Lmk if you want any more info
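The missing class here, javax.annotation.CheckReturnValue, ships in the JSR-305 annotations jar, so a common workaround (an assumption on my part, not something confirmed in this thread) is to put those annotations back on the compile classpath instead of excluding every annotation dependency:

```
dependencies {
    // Hypothetical fix: supply the javax.annotation.* annotations that
    // guava and error-prone reference at compile time.
    compileOnly 'com.google.code.findbugs:jsr305:3.0.2'
}
```

Since the annotations are only needed at compile time, `compileOnly` keeps them out of the APK.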
Re: [protobuf] Why Protocol Buffer (protobuf) is not in C program
A vote for the excellent NanoPB: it is well documented, generates clean code, and works across a wide range of embedded systems. We have been using it for many years without issues. https://koti.kapsi.fi/jpa/nanopb/

Kevin

On Thursday, October 20, 2016 at 12:00:19 PM UTC-5, Adam Cozzette wrote:
>
> We don't support C (just C++) in the main protobuf implementation, and I
> believe this is mostly because there has not been a lot of interest in a C
> implementation. However, there are some other implementations out there
> that support C -- for example, you might want to look at upb
> <https://github.com/google/upb> (which we already use in the internals of
> the Ruby and PHP implementations), protobuf-c
> <https://github.com/protobuf-c/protobuf-c>, and others listed here
> <https://github.com/google/protobuf/blob/master/docs/third_party.md>.
>
> On Mon, Oct 17, 2016 at 10:52 PM, santhosh Jayawadagi wrote:
>
>> The protoc compiler cannot generate files for a C program. Why?
[protobuf] Gem for object <-> message translation in Ruby
Hi all, We've released a gem to support translating seamlessly between Ruby objects and protobuf messages: https://github.com/AngelList/protip. Support for a number of well-known types is included. AngelList has been using it successfully for some time in production. With a protip-decorated message you can, for example, get/set a StringValue field in the same way you'd get/set a scalar string field (while also allowing for nil values). We'd love community feedback on the API, so we can expand the WKT support and hopefully establish some best practices around working with protobuf in Ruby. Thanks! Kevin
Re: [protobuf] Using map with Any
Hi Ranadheer,

Sounds like you already had a good handle on the different tradeoffs then! I personally prefer grouping if there are that many fields ... you would still have to look through each possible field to see if it is set in the Map. Also, using a Map doesn't necessarily mean there are lower object instantiation costs, since depending on the implementation the keys & values of the Map may have to be allocated somewhere as well... Especially with performance stuff, there is no substitute for prototyping and timing example use cases. Looking forward to seeing some of the performance numbers!

Kevin

On Sunday, April 3, 2016 at 6:54:54 AM UTC-5, Ranadheer Pulluru wrote:
>
> Hi Kevin,
>
> Thanks for the suggestion. I indeed considered this option of including
> all the possible fields in one message. But since some of the financial
> instruments, like convertible bonds/options, have 600-700 fields on the
> server side, I felt like instantiating such a big object for every update
> seems a bit costly (though on the wire it is efficient, as you mentioned, as
> only the fields which are set will be present). *Using an object pool we can
> probably avoid the object instantiation cost though.* Also, once the
> object is deserialized on the receiving side, I feel like it is going to be
> a little tricky to figure out which of the fields are actually set on the
> sender side. My understanding is that once the object is deserialized, all
> fields which are not set on the sender side will have default values, and we
> need to iterate over all the fields and compare them against default values
> to know which are actually set by the sender in the message. So, overall I
> felt like for classes having many fields this approach is a bit inefficient.
> We can probably group sets of fields and use a separate message for each
> group (like QuoteUpdate, PositionUpdate, etc.) but that requires a lot of
> changes in my current code base, and theoretically we might end up with too
> many groups.
> Also, fields like positions, market value, etc. need to be published for
> each portfolio, where the portfolios can be dynamic. So, having support for
> a Map-like object seems to solve both problems. Having said this, I'm
> open to suggestions, because the Map approach does have its limitations,
> as the field names can be free-form and can cause issues later.
>
> Point duly noted about the timestamp_utc. I changed my schema accordingly.
>
> Thanks
> Ranadheer
>
> On Saturday, April 2, 2016 at 10:47:47 PM UTC+5:30, Kevin Baker wrote:
>>
>> Hi Ranadheer,
>>
>> Just a piece of advice, but you may want to try to keep your types as
>> strict as possible... i.e. instead of using a general map to store the
>> parameters, use an additional type to keep everything easy to represent.
>> Something like:
>>
>> message Tick {
>>     string subject = 1; // name of the financial instrument - something like MSFT, GOOG, etc
>>     uint64 timestamp = 2; // millis from epoch signifying the timestamp at which the object is constructed at the publisher side.
>>     TickData tick_data = 3;
>> }
>>
>> message TickData {
>>     float ask_price = 1;
>>     float bid_price = 2;
>>     float trade_price = 3;
>>     uint32 trade_size = 4;
>> }
>>
>> or even better, just add all the components into one message:
>>
>> message Tick {
>>     string subject = 1; // name of the financial instrument - something like MSFT, GOOG, etc
>>     uint64 timestamp = 2; // millis from epoch signifying the timestamp at which the object is constructed at the publisher side.
>>     float ask_price = 3;
>>     float bid_price = 4;
>>     float trade_price = 5;
>>     uint64 trade_size = 6;
>>     ...
>> }
>>
>> ... adding any other possible fields you might have in your data. As well
>> as being a lot more compact on-the-wire for bandwidth and CPU improvements,
>> this forces you to think about your data and what might be in it, which
>> will result in a lot fewer bugs down the road for you and your client
>> consumers.
>> You won't have to worry about typos like accidentally typing
>> 'bid_prce' or 'bidPrice' or 'bidprice' in the Map.
>>
>> Protobuf will not send any fields that are still at their default
>> values, so you don't pay any performance penalty for having a lot of
>> optional data in the Tick message. You can also still add fields later to
>> the message, while keeping backwards compatibility with old consumers.
>>
>> Also, another little pedantic thing, but if you are using the timestamp
>> like Javascript's *Date.now()*, always name it *timestamp_utc* instead of
>> just *timestamp*... eventually someone will stuff a local time in there
>> and confuse everyone... better to be explicit.
Re: [protobuf] Using map with Any
Hi Ranadheer,

Just a piece of advice, but you may want to try to keep your types as strict as possible... i.e. instead of using a general map to store the parameters, use an additional type to keep everything easy to represent. Something like:

```
message Tick {
    string subject = 1; // name of the financial instrument - something like MSFT, GOOG, etc
    uint64 timestamp = 2; // millis from epoch signifying the timestamp at which the object is constructed at the publisher side.
    TickData tick_data = 3;
}

message TickData {
    float ask_price = 1;
    float bid_price = 2;
    float trade_price = 3;
    uint32 trade_size = 4;
}
```

or even better, just add all the components into one message:

```
message Tick {
    string subject = 1; // name of the financial instrument - something like MSFT, GOOG, etc
    uint64 timestamp = 2; // millis from epoch signifying the timestamp at which the object is constructed at the publisher side.
    float ask_price = 3;
    float bid_price = 4;
    float trade_price = 5;
    uint64 trade_size = 6;
    ...
}
```

... adding any other possible fields you might have in your data. As well as being a lot more compact on-the-wire for bandwidth and CPU improvements, this forces you to think about your data and what might be in it, which will result in a lot fewer bugs down the road for you and your client consumers. You won't have to worry about typos like accidentally typing 'bid_prce' or 'bidPrice' or 'bidprice' in the Map.

Protobuf will not send any fields that are still at their default values, so you don't pay any performance penalty for having a lot of optional data in the Tick message. You can also still add fields later to the message, while keeping backwards compatibility with old consumers.

Also, another little pedantic thing, but if you are using the timestamp like Javascript's *Date.now()*, always name it *timestamp_utc* instead of just *timestamp*... eventually someone will stuff a local time in there and confuse everyone... better to be explicit.
Kevin

On Friday, April 1, 2016 at 7:50:09 PM UTC-5, Feng Xiao wrote:
>
> On Fri, Apr 1, 2016 at 6:06 AM, Ranadheer Pulluru wrote:
>
>> Hi,
>>
>> I'm planning to use protobuf for publishing tick data of financial
>> instruments. The consumers can be any of the java/python/node.js
>> languages. The tick is expected to contain various fields like (symbol,
>> ask_price, bid_price, trade_price, trade_time, trade_size, etc). Basically,
>> it is sort of a map from field name to value, where the value type can be
>> any of the primitive types. I thought I could define the schema of the Tick
>> data structure, using map
>> <https://developers.google.com/protocol-buffers/docs/proto3#maps> and Any
>> <https://developers.google.com/protocol-buffers/docs/proto3#any>, as
>> follows:
>>
>> syntax = "proto3";
>>
>> package tutorial;
>>
>> import "google/protobuf/any.proto";
>>
>> message Tick {
>>     string subject = 1; // name of the financial instrument - something like MSFT, GOOG, etc
>>     uint64 timestamp = 2; // millis from epoch signifying the timestamp at which the object is constructed at the publisher side.
>>     map<string, google.protobuf.Any> fvmap = 3; // the actual map having field name and values. Something like {ask_price: 10.5, bid_price: 9.5, trade_price: 10, trade_size=5}
>> }
>>
>> Though I'm able to generate the code in different languages for this
>> schema, I'm not sure how to populate the values in the *fvmap*.
>>
>> public class TickTest
>> {
>>     public static void main(String[] args)
>>     {
>>         Tick.Builder tick = Tick.newBuilder();
>>         tick.setSubject("ucas");
>>         tick.setTimestamp(System.currentTimeMillis());
>>         Map<String, Any> fvMap = tick.getMutableFvmap();
>>         //fvMap.put("ask", value); // Not sure how to pass values like 10.5/9.5/10/5 to an Any object here.
>>     }
>> }
>>
>> Could you please let me know how to populate the fvMap with different
>> fields and values here?
>> Please feel free to tell me if using map
>> <https://developers.google.com/protocol-buffers/docs/proto3#maps> and Any
>> <https://developers.google.com/protocol-buffers/docs/proto3#any> is not
>> the right choice and if there are any better alternatives.
>>
> It seems to me a google.protobuf.Struct suits your purpose better:
> https://github.com/google/protobuf/blob/master/src/google/protobuf/struct.proto#L51
>
>> Thanks
>> Ranadheer
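Following Feng's suggestion, the map-of-Any field could be replaced with a google.protobuf.Struct, which models free-form name/value data (numbers, strings, bools, nested structs) without per-value Any packing. A minimal sketch of what the schema might look like (field names reused from the thread; this is an untested assumption, not code from the discussion):

```
syntax = "proto3";

package tutorial;

import "google/protobuf/struct.proto";

message Tick {
  string subject = 1;       // e.g. MSFT, GOOG
  uint64 timestamp_utc = 2; // millis from epoch, UTC
  // Free-form field/value pairs, e.g. {ask_price: 10.5, bid_price: 9.5}.
  google.protobuf.Struct fvmap = 3;
}
```

In the generated Java API this would be populated with something like Struct.newBuilder().putFields("ask_price", Value.newBuilder().setNumberValue(10.5).build()), avoiding the manual Any pack/unpack step for each primitive value.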
[protobuf] Re: Cross-compiling protobuf to 32-bit
Hi SyRenity. I've been running into this same issue on CentOS 5. Were you able to fix the problem?

On Thursday, January 28, 2010 at 5:12:09 AM UTC-8, SyRenity wrote:
>
> Hi.
>
> I'm trying to compile a 32-bit protobuf on a 64-bit machine.
>
> No matter what I tried (changing the build option, the host option, or
> setting the -m32 flag), I couldn't get it working. Either I get stuck at
> the compile stage, or the resulting binaries are 64-bit.
>
> I am able to compile other apps via the -m32 flag.
>
> (I solved this for now by compiling on another 32-bit machine, which
> is awfully inconvenient, plus may cause the strings issue I wrote about
> earlier.)
>
> Any advice how to sort it out?
>
> Thanks!
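For an autoconf-based build like protobuf's, the usual cross-compile recipe (an assumption on my part, not verified in this thread) is to pass -m32 consistently to the C compiler, the C++ compiler, and the linker, and to tell configure the target host. A sketch:

```
# Untested sketch: protobuf is C++, so -m32 must go to CXXFLAGS as well,
# not only CFLAGS - setting just one is a common cause of mixed builds.
./configure --host=i686-pc-linux-gnu \
    CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32
make
# Verify the result really is 32-bit:
file src/.libs/libprotobuf.so
```

The --host triplet shown is illustrative; the 32-bit development libraries for the host system must also be installed.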
[protobuf] Questions about Services
Hi, I'm completely new to using protocol buffers and there are some things that aren't very clear. I may be missing something very simple, but the documentation says the generic services are deprecated, so you have to turn them on explicitly using option cc_generic_services = true;. What is the replacement for this in 3+? I did some digging and it looks like gRPC is a plugin that can also be used to generate code, but I'd like to avoid that because it says it's still development builds. Are plugins the only option now? Should I still be using the simple services? How well does gRPC work? Is there anything I should be concerned about implementing a C# client proxy and a C++ server? Is that even possible?
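For reference, the option in question sits at file scope in the .proto file. A minimal sketch (message and service names are illustrative, not from the thread):

```
syntax = "proto3";

// Opt back in to the deprecated generated service stubs; without this
// line only the message classes are emitted and a code-generator plugin
// (such as gRPC's protoc plugin) is expected to produce the service code.
option cc_generic_services = true;

message EchoRequest  { string text = 1; }
message EchoResponse { string text = 1; }

service EchoService {
  rpc Echo (EchoRequest) returns (EchoResponse);
}
```

With the option omitted (the default), the same service block is still parsed and made available to plugins via the descriptor, which is the model the plugin-based approach relies on.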
[protobuf] csharptutorial
As of 2015-10-07, on page https://developers.google.com/protocol-buffers/docs/csharptutorial the section "Defining your protocol format" begins by defining "package tutorial" twice (see attachment or below):

```
package tutorial;
syntax = "proto3";
package tutorial;
```

Is this a typo? I was unable to generate a csharp class until I removed the first "package tutorial" line. Thanks
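This does look like a documentation typo: in proto3 the syntax statement must be the first non-comment, non-empty statement in the file, and a file may declare at most one package. The corrected opening would presumably read:

```
syntax = "proto3";

package tutorial;
```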
[protobuf] Unspecified protobuf version
Hello, while working on my project I received this error message: "This file was generated by an older version of protoc which is", and the relevant portion of my code shows: 2005000 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION. Could anybody tell me what's going wrong or how I should fix it? Thanks and best regards! =)
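That version check usually fires when the generated .pb.h/.pb.cc files were produced by an older protoc than the libprotobuf headers you are compiling against. The usual fix (my assumption, not confirmed in the thread; the file name below is a placeholder) is to regenerate the sources with the protoc that matches the installed runtime:

```
# Check which protoc is on the PATH and that its version matches the
# installed libprotobuf, then regenerate (placeholder .proto name).
protoc --version
protoc --cpp_out=. my_message.proto
```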
Re: [protobuf] Re: Protobuf Buffers v3.0.0-alpha-1
That's great news, thanks!!! Looking forward to proto3! Kevin On Tue, Mar 10, 2015 at 3:24 PM, Feng Xiao wrote: > > > On Thu, Feb 26, 2015 at 11:55 PM, Kevin Baker wrote: > >> Hi, >> >> Thanks for all your work with protobuf. I am excited about the changes >> with proto3 that will reduce errors (no forgetting to set has_* in nanopb, >> yay!) and will make mapping into new languages much simpler, helping our >> interop case a lot. >> >> My question is: We are currently using protobuf pretty extensively and it >> looks like we will not be impacted by any changes in proto3 in our proto >> files (all fields being present, removal of required, default values, etc.) >> Does this mean our existing proto2 applications are compatible on-the-wire >> with proto3? >> > Yes. > > >> How upwards-compatible is proto3 with proto2? >> > Proto3 uses the same wire-format as proto2. A proto2 application should be > able to parse the output of a proto3 server using the same .proto > definition (only differing in syntax version). It's also true vice versa. > > >> >> Of course I will test this as well but I was wondering if there are any >> planned breakages of the wire format or if they will be compatibly phased >> in. >> > There is no planned wire-format changes for proto3. > > >> >> Thanks, >> Kevin >> >> On Sunday, February 8, 2015 at 10:04:30 PM UTC-6, Feng Xiao wrote: >>> >>> >>> >>> On Sat, Feb 7, 2015 at 4:31 AM, Jeremy Swigart >>> wrote: >>> >>>> I don't understand. If a message is a simple struct then the generated >>>> wrapper code would populate it with the default as defined by the proto it >>>> was compiled with wouldn't it? Are you suggesting that the implementation >>>> on different platforms would lack the wrapper objects generated by >>>> protobuf? >>> >>> There may be languages whose protobuf implementation would not be able >>> to efficiently support these features. 
Note that these decisions are not
>>> made based on the current languages that we support, but based on the
>>> fact that we are going to support a much wider range of languages.
>>>
>>>> As long as you have that you have the default value. This rationale
>>>> doesn't make sense.
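The wire-compatibility point above can be illustrated with a pair of equivalent definitions (illustrative message, shown as two alternative files since a file has exactly one syntax statement): because field numbers and types match, bytes serialized by one can be parsed by the other.

```
// file: ping_proto2.proto
syntax = "proto2";
message Ping {
  optional string host = 1;
  optional int32 count = 2;
}
```

```
// file: ping_proto3.proto - same field numbers and types, so the
// serialized bytes are interchangeable with the proto2 version.
syntax = "proto3";
message Ping {
  string host = 1;
  int32 count = 2;
}
```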
Re: [protobuf] Re: Protobuf Buffers v3.0.0-alpha-1
Hi, Thanks for all your work with protobuf. I am excited about the changes with proto3 that will reduce errors (no forgetting to set has_* in nanopb, yay!) and will make mapping into new languages much simpler, helping our interop case a lot. My question is: We are currently using protobuf pretty extensively and it looks like we will not be impacted by any changes in proto3 in our proto files (all fields being present, removal of required, default values, etc.) Does this mean our existing proto2 applications are compatible on-the-wire with proto3? How upwards-compatible is proto3 with proto2? Of course I will test this as well but I was wondering if there are any planned breakages of the wire format or if they will be compatibly phased in. Thanks, Kevin On Sunday, February 8, 2015 at 10:04:30 PM UTC-6, Feng Xiao wrote: > > > > On Sat, Feb 7, 2015 at 4:31 AM, Jeremy Swigart > wrote: > >> I don't understand. If a message is a simple struct then the generated >> wrapper code would populate it with the default as defined by the proto it >> was compiled with wouldn't it? Are you suggesting that the implementation >> on different platforms would lack the wrapper objects generated by protobuf? > > There may be languages whose protobuf implementation would not be able to > efficiently support these features. Note that these decisions are not made > based on the current languages that we support, but based on that we are > going to support a much wider range of languages. > > >> As long as you have that you have the default value. This rationale >> doesn't make sense. >> >> -- >> You received this message because you are subscribed to the Google Groups >> "Protocol Buffers" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to protobuf+u...@googlegroups.com . >> To post to this group, send email to prot...@googlegroups.com >> . >> Visit this group at http://groups.google.com/group/protobuf. 
[protobuf] Re: Issue 632 in protobuf: Protocol buffer library fails to build if download directory path contains a space.
I encountered this same issue, however, by placing the unzipped "protobuf-2.5.0" folder *inside* a folder with a space (i.e. ../protocol buffers/protobuf-2.5.0).

Thanks,
Kevin

On Monday, April 21, 2014 8:38:41 AM UTC-7, prot...@googlecode.com wrote:
>
> Status: New
> Owner: liu...@google.com
> Labels: Type-Defect Priority-Medium
>
> New issue 632 by tom.ritc...@gmail.com: Protocol buffer library fails to
> build if download directory path contains a space.
> http://code.google.com/p/protobuf/issues/detail?id=632
>
> What steps will reproduce the problem?
> 1. Download protocol buffer source.
> 2. Rename directory to contain a space character (i.e.
> ~/Downloads/"protobuf-2.5.0 2")
> 3. Attempt to build protocol buffer library.
>
> What is the expected output? What do you see instead?
>
> I expect a completed build. I get the following error:
>
> /bin/sh ../libtool --tag=CXX --mode=link g++ -D_THREAD_SAFE -Wall
> -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare -O2 -g -DNDEBUG
> -D_THREAD_SAFE -o protoc main.o libprotobuf.la libprotoc.la -lz
> libtool: link: cannot find the library `2/src/libprotobuf.la' or unhandled
> argument `2/src/libprotobuf.la'
> make[2]: *** [protoc] Error 1
> make[1]: *** [all-recursive] Error 1
> make: *** [all] Error 2
>
> What version of the product are you using? On what operating system?
>
> I'm using protocol buffers 2.5.0 on Mac OS/X 10.9 ("Mavericks")
>
> Please provide any additional information below.
>
> This is almost certainly a shell problem caused by not quoting the
> arguments to libtool. Easy workaround - rename or move the download
> directory so it doesn't have a space in the path.
>
> --
> You received this message because this project is configured to send all
> issue notifications to this address.
> You may adjust your notification preferences at:
> https://code.google.com/hosting/settings
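The workaround the issue report itself suggests can be sketched as a one-line move to a space-free path before running ./configure && make (demonstrated here on a scratch directory rather than a real source tree):

```shell
# Sketch of the suggested workaround: relocate the source tree to a
# path without spaces so libtool never sees an unquoted space.
rm -rf /tmp/protobuf-2.5.0 "/tmp/protobuf demo"
mkdir -p "/tmp/protobuf demo/protobuf-2.5.0"   # a path containing a space
mv "/tmp/protobuf demo/protobuf-2.5.0" /tmp/protobuf-2.5.0
ls -d /tmp/protobuf-2.5.0
```

After the move, configure and make can be run from /tmp/protobuf-2.5.0 as usual.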
[protobuf] Simple library for converting between protobuf objects and JsonCpp objects.
I've been using the following code in my project to convert between JSON and protobuf objects (in C++): http://code.google.com/p/protobuf-to-jsoncpp/ http://code.google.com/p/protobuf-to-jsoncpp/source/browse/trunk/json_protobuf.h Currently, this has only a rudimentary Makefile (that I've run on Ubuntu 12.04) that expects the protobuf library and JsonCpp (http://jsoncpp.sourceforge.net/) to be installed in /usr/local. Please let me know what you think and if you have any suggestions. Thanks, Kevin Regan k.re...@emc.com kevin.d.re...@gmail.com
Re: [protobuf] Partial Decoding of message
This is how I handle the same issue. This would be similar to most multi-threaded daemons taking client input: the manager reads the message type and passes the socket/stream to a handling thread.

On Monday, July 8, 2013 10:59:12 AM UTC-7, Ilia Mirkin wrote:
>
> Unfortunately it's not guaranteed that earlier fields appear earlier
> in the message. Although that is often the case, I wouldn't recommend
> writing your code s.t. it assumes this. The usual way that I handle
> this is by splitting the message into a header and data message, and
> then send something like
>
> That way your manager thread just decodes the header, figures out what
> to do, and sends it on. Then the actual worker thread decodes the
> data.
>
> If this is not an option, you can write a custom decoder that just
> skips over fields you don't need to read. This is a little tricky, but
> if you're not trying to be too generic it shouldn't be that much code.
>
> On Mon, Jul 8, 2013 at 6:54 AM, wrote:
> > Hi Group,
> > I am using protobuf in multi-threaded software. Here the manager thread
> > decodes the protobuf-encoded message and then assigns the message to a
> > particular worker thread based on a key. I want to minimize per-message
> > processing at the manager thread. Is it possible to encode the key at the
> > head of the message and decode only this key at the manager thread?
> > Complete decoding of the message would be moved to the actual worker
> > thread.
> > thanks
> > Ittium
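The header/data split Ilia describes depends on length-delimited framing, so the manager can peel off a small routing header without ever decoding the payload. Here is a minimal pure-Python sketch of the idea; it uses a fixed 4-byte big-endian length prefix for simplicity (protobuf's own `writeDelimitedTo()` uses a varint prefix instead), and the frame layout is an assumption for illustration, not a protobuf API:

```python
import io
import struct

def write_frame(stream, header: bytes, payload: bytes) -> None:
    # Prefix each part with a 4-byte big-endian length so the reader
    # knows exactly how many bytes to consume for it.
    for part in (header, payload):
        stream.write(struct.pack(">I", len(part)))
        stream.write(part)

def read_frame(stream):
    parts = []
    for _ in range(2):
        (n,) = struct.unpack(">I", stream.read(4))
        parts.append(stream.read(n))
    # The payload comes back as raw, still-undecoded bytes.
    return tuple(parts)

# The manager decodes only the small header to pick a worker thread,
# then hands the untouched payload bytes to that worker.
buf = io.BytesIO()
write_frame(buf, b"worker-7", b"<serialized protobuf bytes>")
buf.seek(0)
header, payload = read_frame(buf)
```

Because the payload's length is known up front, the manager never needs to parse it; full decoding moves to the worker, exactly as the original question asked.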
Re: [protobuf] Re: Question about set_allocated_foo/release_foo
It is very common to have a "const std::string&" argument. Indeed, just about any API that you would export to the end user would probably support taking a "const std::string& key" in this case.

--Kevin

On Thursday, March 14, 2013 6:31:54 PM UTC-7, Feng Xiao wrote:
>
> A better solution would be to refactor your code a little bit to pass in
> a mutable string object directly. You can expect protos to take movable
> objects in the future, but even when that happens you will still need to
> refactor your code.
> I don't think we will change protobuf to share references to const
> objects, nor will protobuf support shared_ptr stuff.
>
> On Thu, Mar 14, 2013 at 6:17 PM, Kevin Regan wrote:
>
>> Giving this a bump. This might also be considered a feature request (the
>> ability to temporarily assign a string value to a protocol buffer during
>> serialization, rather than copying it).
>>
>> --Kevin
>>
>> On Friday, March 8, 2013 1:49:29 PM UTC-8, Kevin Regan wrote:
>>>
>>> I often have situations like this:
>>>
>>> void do_something(const std::string& value)
>>> {
>>>     my_proto_buffer.set_value(value);
>>>     // serialize my_proto_buffer to a stream
>>> }
>>>
>>> Would it be valid to do this:
>>>
>>> void do_something(const std::string& value)
>>> {
>>>     my_proto_buffer.set_allocated_value(&((std::string&)value));
>>>     try {
>>>         // serialize my_proto_buffer to a stream
>>>     }
>>>     catch (...) {
>>>         my_proto_buffer.release_value();
>>>         throw;
>>>     }
>>>     my_proto_buffer.release_value();
>>> }
>>>
>>> or can I not rely on the internal string not being modified during
>>> serialization?
>>>
>>> Thanks,
>>> Kevin
[protobuf] Re: Question about set_allocated_foo/release_foo
Giving this a bump. This might also be considered a feature request (the ability to temporarily assign a string value to a protocol buffer during serialization, rather than copying it).

--Kevin

On Friday, March 8, 2013 1:49:29 PM UTC-8, Kevin Regan wrote:
>
> I often have situations like this:
>
> void do_something(const std::string& value)
> {
>     my_proto_buffer.set_value(value);
>     // serialize my_proto_buffer to a stream
> }
>
> Would it be valid to do this:
>
> void do_something(const std::string& value)
> {
>     my_proto_buffer.set_allocated_value(&((std::string&)value));
>     try {
>         // serialize my_proto_buffer to a stream
>     }
>     catch (...) {
>         my_proto_buffer.release_value();
>         throw;
>     }
>     my_proto_buffer.release_value();
> }
>
> or can I not rely on the internal string not being modified during
> serialization?
>
> Thanks,
> Kevin
[protobuf] Question about set_allocated_foo/release_foo
I often have situations like this:

void do_something(const std::string& value)
{
    my_proto_buffer.set_value(value);
    // serialize my_proto_buffer to a stream
}

Would it be valid to do this:

void do_something(const std::string& value)
{
    my_proto_buffer.set_allocated_value(&((std::string&)value));
    try {
        // serialize my_proto_buffer to a stream
    }
    catch (...) {
        my_proto_buffer.release_value();
        throw;
    }
    my_proto_buffer.release_value();
}

or can I not rely on the internal string not being modified during serialization?

Thanks,
Kevin
[protobuf] Any examples of a SHA1 or MD5 ZeroCopy{Input/Output}Stream?
I'd like to be able to compute a SHA1 or MD5 checksum while writing out and reading in a file (which I'm doing through a CodedStream/GzipStream/FileStream stack). I'd like to throw the checksum ZeroCopyStream in there (probably between the GzipStream and FileStream). Are there any examples out there? I'm not lazy, but I'd rather not write one if it already exists. :)

Thanks!
Kevin
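The underlying idea is small regardless of language: wrap the next stream in the stack and run every buffer through a hash as it passes. Below is a pure-Python sketch of that concept; `HashingWriter` is a hypothetical name, and this is not the actual C++ `ZeroCopyOutputStream` interface (which hands out internal buffers via `Next()`/`BackUp()` rather than accepting `write()` calls):

```python
import hashlib
import io

class HashingWriter:
    """Forwards writes to an underlying stream while updating a digest."""

    def __init__(self, stream, algo="sha1"):
        self._stream = stream
        self._hash = hashlib.new(algo)

    def write(self, data: bytes) -> int:
        self._hash.update(data)          # checksum sees every byte...
        return self._stream.write(data)  # ...before it reaches the file

    def hexdigest(self) -> str:
        return self._hash.hexdigest()

# Sits "between" a producer and the real file, like the GzipStream/
# FileStream layering described above.
out = HashingWriter(io.BytesIO())
out.write(b"hello ")
out.write(b"world")
digest = out.hexdigest()
```

A C++ version would implement the same pass-through trick inside `Next()`/`BackUp()`, hashing each buffer just before forwarding it to the wrapped stream.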
[protobuf] Protobuf Map
I've created the Map (see below) as a generic container. Has this already been done somewhere that I can download? Any comments on how to optimize this for speed and size or leverage an existing solution would be helpful. Thanks in advance.

message Map {
    message DateTime {
        required int32 Year = 1;
        required int32 Month = 2;
        required int32 Day = 3;
        optional int32 Hour = 4 [default = 0];
        optional int32 Minute = 5 [default = 0];
        optional int32 Second = 6 [default = 0];
        optional int32 Milli = 7 [default = 0];
    }
    message ValueNode {
        optional string Text = 1;
        optional int32 Integer = 2;
        optional double Double = 3;
        optional DateTime Date = 4;
        optional Map childMap = 5;
    }
    message MapKeyAndValue {
        required string key = 1;
        required ValueNode value = 5;
    }
    repeated MapKeyAndValue map_set = 1;
}
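For later readers: since protobuf 3.0 the language itself offers a `map` field type, which generates native map accessors and is encoded on the wire as a repeated key/value entry, much like the hand-rolled MapKeyAndValue pattern above. A sketch reusing the message names from above (keys must be an integer or string type; values may be any message type):

```proto
message Map {
  // Built-in map field; replaces the repeated MapKeyAndValue pattern.
  map<string, ValueNode> map_set = 1;
}
```

The hand-rolled pair message remains useful if you need repeated or message-typed keys, which the built-in map does not allow.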
[protobuf] protobuf TextFormat question: Parsing TextFormat into message [fixed]
...previous message didn't have example code sanitized and probably doesn't make sense. Please delete/ignore previous and use this one. :)

It's hard to find much high-level documentation on using TextFormat, but for some Python protobuf work I'm doing, the text format is much easier to read (and easier to organize into test vector files) than manipulating the python objects manually. (I've changed message types/contents to generalize; please forgive any typos, this isn't the real code.)

my_envelope.proto (this file gets sent through protoc to generate Python code):

message MyEnvelope {
    optional InnerMessage innerMessage = 1;
}

and I have a test message that I'd like to take from TextFormat and put into a protobuf message structure:

innerMessage {
    value1: 100
    value2: 200
}

I have found that I can get it to work in python using

from google.protobuf import text_format
import my_envelope_pb2

my_msg = my_envelope_pb2.MyEnvelope()
text_format.Merge("""
innerMessage {
    value1: 100
    value2: 200
}
""", my_msg)

But I guess my question is this: is there a preferred way to do this? I had originally expected that ParseFromString would figure out that this was an ASCII representation and call Merge appropriately, i.e. I had expected the following would work

import my_envelope_pb2

my_msg = my_envelope_pb2.MyEnvelope()
my_msg.ParseFromString("""
innerMessage {
    value1: 100
    value2: 200
}
""")

but from the errors it appeared that ParseFromString only deals with string containers for the binary format. Is this the case? Just trying to make sure I'm not calling Merge at an inappropriate/strange layer when some higher-level call exists. As I mentioned, documentation on text format is somewhat thin, so this is the best I could piece together from API info.

Thanks!
[protobuf] Re: Delay in Sending Data
I seem to have fixed the reading issues on the C++ and Java sides and it is working as expected. My only concern now is in regards to message sizes and prepending the size at the beginning. What is the best way to go about this? My test message required only one byte, but my next messages will probably require 2 if not 3 bytes. What is the proper way to handle this in the C++ code, as the Java code has this built in?

In addition, my colleague has used Thrift before and was extremely surprised that the C++ classes did not have matching function calls in Java and vice versa. Can someone explain this shortcoming?

Thanks,
Kevin

On Oct 21, 11:28 am, Evan Jones wrote:
> On Oct 21, 2010, at 1:21, Kevin wrote:
>
> > Basically, the code that receives the data will wait until the stream
> > is closed before reading the data. I thought that flushing the data
> > would cause the data to be sent but that apparently has no effect. Is
> > this my implementation or a problem with using the writeTo
> > function?
>
> The flush *should* be causing the data to be sent. The problem is on
> the reader side: the default read methods read until the end of the
> stream. You'll need to prepend a length. You may want to use
> parseDelimited(). See the following document, or search the archives
> for many conversations about this. Hope this helps,
>
> Evan
>
> http://code.google.com/apis/protocolbuffers/docs/techniques.html#stre...
>
> --
> Evan Jones
> http://evanjones.ca/
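On the size-prefix question: the prefix Java's delimited helpers emit is a base-128 varint, so it grows from 1 to 2 bytes automatically once the length passes 127; there is no fixed width to pick. On the C++ side, `CodedOutputStream::WriteVarint32()` and `CodedInputStream::ReadVarint32()` produce and consume the same encoding. A pure-Python sketch of that varint format, for reference:

```python
def encode_varint(n: int) -> bytes:
    # Base-128: 7 payload bits per byte, least-significant group first;
    # the high bit of each byte means "more bytes follow".
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        out.append(bits | (0x80 if n else 0))
        if not n:
            return bytes(out)

def decode_varint(data: bytes):
    """Return (value, number_of_bytes_consumed)."""
    value = shift = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, i + 1
        shift += 7
    raise ValueError("truncated varint")

# Lengths up to 127 fit in one byte; 128..16383 need two, and so on.
```

So writing the length with `WriteVarint32()` on the C++ side and reading with the Java delimited parsing handles the 1-vs-2-vs-3 byte question transparently.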
[protobuf] Delay in Sending Data
I initially was trying to get Protocol Buffers to work from Java to C++ over sockets. I am still unsure how to do this, but I decided to test whether I could get a Java-to-Java version working over sockets, and was successful. However, I was experiencing a weirdness which might be causing the error on the C++ side of things. Below is my code on the server side.

To make a long story short, I thought that closing the stream was causing my C++ code to fail, so I put a little sleep in there to make sure it was still connected. Then when adding the Java stuff I left the sleep in. Basically, the code that receives the data will wait until the stream is closed before reading the data. I thought that flushing the data would cause the data to be sent but that apparently has no effect. Is this my implementation or a problem with using the writeTo function?

public class ProtoWriter {
    public static void main(String[] args) {
        Socket socket;
        ServerSocket serverSocket;
        ObjectOutputStream oos = null;

        serverSocket = new ServerSocket(12345);
        socket = serverSocket.accept();
        oos = new ObjectOutputStream(socket.getOutputStream());

        Measurement measurement = measBuilder.build();
        measurement.writeTo(oos);
        oos.flush();

        TimeUnit.SECONDS.sleep(10);
        oos.close();
Re: [protobuf] How realistic are benchmarks such as NorthWind ?
Marc: Thanks for your input. I think your comment helps me clarify my query: Most applications or services that are "producers" will generate data with N fields in it. Consumers may be interested in only m fields- m could be 5 and N could be 20. For example: An address book service will generate an address with 25 fields in it. An application that consumes the service will want only 3- say name, phone number, and zip code In the current implementation, there is a way of picking 5 fields only. Ideally, the time taken to pick only 3 fields, should be a lot less than picking 25 fields. An even better implementation will screen records based on field values. I do not agree that this is "making it a database". XML has allowed query processing for at least 10 years. XML even allows joining 2 XML records based on a common key. In a database, whether the traditional RDBMS or a NoSQL kind, one has to pay the price for ACID properties or for "CAP" - consistency, availability and partitioning. These problems do not exist if one is screening 10,000 protocol buffers looking for a particular field. I would imagine that there are many applications which read Protocol Buffers for thousands of records, picking only a small fraction of them. I appreciate the simplicity of Protocol Buffers, but adding features like these have nothing to do with complicating the original simplicity, as it is like a layer that adds value without overhead- Those applications which want to screen based on field values, can screen. Kevin On Fri, May 14, 2010 at 11:52 PM, Marc Gravell wrote: > Firstly, I must note that those benchmarks are specific to protobuf-net (a > specific implementation), not "protocol buffers" (which covers a range of > implementations). Re "is it not more realistic"; well, that depends entirely > on what your use-case *is*. It /sounds/ like you are really talking about > querying ad-hoc data; if so a file-based database may be more appropriate. 
> But it depends entirely on your scenario. > > It /would/ be possible (with protobuf-net at least; I can't comment beyond > that) to construct a type that represents the data that you *are* interested > in - the other fields would be quietly dropped without having to fully > process them, avoiding some CPU. Likewise, it is possible to read items in a > non-buffered way (i.e. you only have 1 object directly available in memory; > any others are discarded immediately, available for GC). However; again - it > sounds like you *really* want a database. Which "protocol buffers" isn't. > > Marc Gravell > > On 14 May 2010 11:31, Kevin Apte- SOA and Cloud Computing Architect < > technicalarchitect2...@gmail.com> wrote: > >>I saw that ProtoBuf has been benchmarked using the Northwind data >> set- a data set of size 130K, with 3000 objects including orders and >> order line items. >> >> This is an excellent review: >> http://code.google.com/p/protobuf-net/wiki/Performance >> >> Is it not more realistic, to have a benchmark with a much larger file, >> in which we are interested only in a few records, and a few fields >> within those records. >> >> For example: 10,000 order line items, we want only a line item with a >> particular product code. >> Or we want to pick orders for a particular customer type, or with a >> particular description. >> >> Are there use cases where data is stored in Protocol Buffer Format in >> a file, and read into memory? >> >> Another issue is that the size seems rather small- it is only 256 >> bytes per object,- I would imagine there are many use cases where the >> objects are much bigger. >> >> Many use cases are going to be with much larger objects and will >> select m out N fields- where m will be 5 and N will be 20. This is >> because very rarely can an application want all of the information in >> a protocol buffer generated by another program. >> >> Any comments? 
>
> --
> Regards,
>
> Marc
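Marc's point about dropping unwanted fields cheaply works because the wire format is self-describing enough to skip over: each field starts with a tag varint encoding `(field_number << 3) | wire_type`, and every wire type has a known way to find its end. A pure-Python sketch that pulls out one length-delimited field while skipping everything else (the sample bytes are a hand-encoded message with a varint field 1 and a string field 2; this is an illustration of the technique, not the protobuf library's API):

```python
def read_varint(data: bytes, pos: int):
    """Decode a base-128 varint at pos; return (value, new_pos)."""
    value = shift = 0
    while True:
        byte = data[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, pos
        shift += 7

def extract_field(data: bytes, wanted: int):
    """Return the raw bytes of length-delimited field `wanted`,
    skipping every other field without decoding its contents."""
    pos = 0
    while pos < len(data):
        key, pos = read_varint(data, pos)
        field, wire_type = key >> 3, key & 7
        if wire_type == 0:                 # varint: decode to find the end
            _, pos = read_varint(data, pos)
        elif wire_type == 1:               # fixed 64-bit
            pos += 8
        elif wire_type == 2:               # length-delimited
            length, pos = read_varint(data, pos)
            if field == wanted:
                return data[pos:pos + length]
            pos += length
        elif wire_type == 5:               # fixed 32-bit
            pos += 4
        else:
            raise ValueError("unsupported wire type")
    return None

# Hand-encoded message: field 1 = varint 150 (tag 0x08 0x96 0x01),
# field 2 = string "abc" (tag 0x12, length 3).
msg = bytes([0x08, 0x96, 0x01, 0x12, 0x03]) + b"abc"
name = extract_field(msg, 2)
```

Skipping is O(bytes) with no object construction, which is why reading m of N fields can be much cheaper than a full parse, though it is still a linear scan rather than the indexed query a database would give you.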
[protobuf] How realistic are benchmarks such as NorthWind ?
I saw that ProtoBuf has been benchmarked using the Northwind data set: a data set of size 130K, with 3000 objects including orders and order line items. This is an excellent review: http://code.google.com/p/protobuf-net/wiki/Performance

Is it not more realistic to have a benchmark with a much larger file, in which we are interested only in a few records, and a few fields within those records? For example: 10,000 order line items, and we want only a line item with a particular product code. Or we want to pick orders for a particular customer type, or with a particular description.

Are there use cases where data is stored in Protocol Buffer format in a file and read into memory?

Another issue is that the size seems rather small: it is only 256 bytes per object. I would imagine there are many use cases where the objects are much bigger.

Many use cases are going to be with much larger objects and will select m out of N fields, where m will be 5 and N will be 20. This is because very rarely will an application want all of the information in a protocol buffer generated by another program.

Any comments?
[protobuf] Re: Creating an instance of a message from the descriptor (java)
Kenton, This description in the MessageLite documentation is what led me to believe the default values would be there. I figured I was misinterpreting the documentation, or the meaning behind the default value: getDefaultInstanceForType MessageLite getDefaultInstanceForType() Get an instance of the type with all fields set to their default values. This may or may not be a singleton. This differs from the getDefaultInstance() method of generated message classes in that this method is an abstract method of the MessageLite interface whereas getDefaultInstance() is a static method of a specific class. They return the same thing. http://code.google.com/apis/protocolbuffers/docs/reference/java/index.html Thanks for your help, Kevin On Mar 22, 7:07 pm, Kenton Varda wrote: > "required" means "If this field is not explicitly set before build() is > called, or if parseFrom() parses a message missing this field, throw an > exception.". It does NOT mean "Automatically fill in this field.". Please > point me at any documentation which suggests the latter meaning so I can fix > it. > > The default value is the value returned by the field's getter when the field > has not been explicitly assigned any other value. > > getAllFields() only returns field which have been explicitly set. > > On Mon, Mar 22, 2010 at 2:42 PM, Kevin Tambascio > wrote: > > > > > Kenton, > > > I did make some more progress today, along the lines of what you said > > below. I'm seeing an issue where calling > > DynamicMessage.getDefaultInstance(type) is not filling in the default > > values. This is with 2.3.0 of GPB. > > > My proto file: > > > message StringTableEntry > > { > > required string lang = 1 [default = "en-US"]; > > required string value = 2 [default = ""]; > > } > > > When I instantiate an instance of StringTableEntry, using > > DynamicMessage, the required fields are not in the message instance. 
> > From reading the documentation, it sounds like the default values > > should show up if I create an object. My code for creation is this: > > > FileDescriptor fd = > > FileDescriptor.buildFrom(fdSet.getFile(0), fds); > > List messageTypes = fd.getMessageTypes(); > > for(Descriptor type : messageTypes) > > { > > DynamicMessage dm = > > DynamicMessage.newBuilder(type).getDefaultInstanceForType(); > > > //DynamicMessage dm = > > DynamicMessage.getDefaultInstance(type); > > > Map dmFields = > > dm.getAllFields(); > > for(Entry entry : > > dmFields.entrySet()) > > { > > System.out.println("default value for this > > field: " + > > entry.getValue().toString()); > > entry.setValue("Data"); > > } > > > System.out.println(XmlFormat.printToString(dm)); > > } > > > When instantiating the object, using either the commented or > > uncommented out lines of code above, fails to contain the required > > fields with their default values. If I poke around the 'type' > > variable in the debugger, I can see that the 2 fields are in the > > descriptor, and the default values are there as well. But the > > instance of the message does not contain those two fields > > (dmFields.entrySet() returns null, and the code inside the > > "for(Entry entry : dmFields.entrySet())" loop > > does not execute). > > > It seems like I could write a routine to set the default values based > > on the Descriptor data, but I think that getDefaultInstance should do > > that for me. > > > Thoughts? > > > Thanks, > > Kevin > > > On Mar 22, 4:02 pm, Kenton Varda wrote: > > > DescriptorProto.getDescriptorForType() returns the Descriptor for > > > DescriptorProto, not for the type which that DescriptorProto is > > describing. > > > Remember that DescriptorProto is just a protocol message like any other > > -- > > > it does not have any special methods that recognize its higher-level > > > meaning. > > > > To convert DescriptorProtos to Descriptors, you need to use > > > FileDescriptor.buildFrom(). 
> > > > On Sun, Mar 21, 2010 at 5:20 PM, Kevin Tambascio > > > wrote: > > > > > Hi, > > > > > I'm having trouble getting the following code to work
[protobuf] Re: Creating an instance of a message from the descriptor (java)
Kenton,

I did make some more progress today, along the lines of what you said below. I'm seeing an issue where calling DynamicMessage.getDefaultInstance(type) is not filling in the default values. This is with 2.3.0 of GPB.

My proto file:

message StringTableEntry
{
    required string lang = 1 [default = "en-US"];
    required string value = 2 [default = ""];
}

When I instantiate an instance of StringTableEntry, using DynamicMessage, the required fields are not in the message instance. From reading the documentation, it sounds like the default values should show up if I create an object. My code for creation is this:

FileDescriptor fd = FileDescriptor.buildFrom(fdSet.getFile(0), fds);
List messageTypes = fd.getMessageTypes();
for (Descriptor type : messageTypes)
{
    DynamicMessage dm = DynamicMessage.newBuilder(type).getDefaultInstanceForType();
    //DynamicMessage dm = DynamicMessage.getDefaultInstance(type);

    Map dmFields = dm.getAllFields();
    for (Entry entry : dmFields.entrySet())
    {
        System.out.println("default value for this field: " + entry.getValue().toString());
        entry.setValue("Data");
    }

    System.out.println(XmlFormat.printToString(dm));
}

When instantiating the object, using either the commented or uncommented lines of code above, the message fails to contain the required fields with their default values. If I poke around the 'type' variable in the debugger, I can see that the 2 fields are in the descriptor, and the default values are there as well. But the instance of the message does not contain those two fields (dmFields.entrySet() returns null, and the code inside the "for (Entry entry : dmFields.entrySet())" loop does not execute).

It seems like I could write a routine to set the default values based on the Descriptor data, but I think that getDefaultInstance should do that for me.

Thoughts?
Thanks, Kevin On Mar 22, 4:02 pm, Kenton Varda wrote: > DescriptorProto.getDescriptorForType() returns the Descriptor for > DescriptorProto, not for the type which that DescriptorProto is describing. > Remember that DescriptorProto is just a protocol message like any other -- > it does not have any special methods that recognize its higher-level > meaning. > > To convert DescriptorProtos to Descriptors, you need to use > FileDescriptor.buildFrom(). > > On Sun, Mar 21, 2010 at 5:20 PM, Kevin Tambascio > wrote: > > > > > Hi, > > > I'm having trouble getting the following code to work. Using > > protoc.exe, I generated a file with the descriptor data using the -- > > descriptor_set_out file. I've written some Java code to read the > > file, and try to instantiate a default instance of one of the objects > > in the descriptor, so that I write it out to an XML file using the > > protobuf-format-java library. > > > Here's my code. The variable "descriptorData" contains the binary > > content of the descriptor file, without any modifications: > > > DescriptorProtos.FileDescriptorSet fdSet = > > FileDescriptorSet.newBuilder().mergeFrom(descriptorData).build(); > > FileDescriptorProto fdp = fdSet.getFile(0); > > > List messageTypes = > > fdp.getMessageTypeList(); > > for(DescriptorProto type : messageTypes) > > { > > System.out.println("Type is: " + type.getName()); > > FileDescriptor fd = > > type.getDescriptorForType().getFile(); > > > DynamicMessage dm = > > DynamicMessage.getDefaultInstance(type.getDescriptorForType()); > > System.out.println(XmlFormat.printToString(dm)); > > } > > > I've tried numerous combinations of the above code, but each time I > > get the following output: > > > Type is: Type1 > > > > Type is: Type2 > > > > Type is: Type3 > > > > Type is: Type4 > > > > Type is: Type5 > > > > Type is: Type6 > > > > > The proto file has Type1, Type2, Type3, etc, defined as messages. 
The > > fact that type.getName() does return the type names from my proto > > file, leads me to believe I'm heading in the right direction. > > However, the DynamicMessage type that is created (and serialized to > > XML) seems to indicate that I'm not passing the right descriptor > > instance in to
[protobuf] Creating an instance of a message from the descriptor (java)
Hi,

I'm having trouble getting the following code to work. Using protoc.exe, I generated a file with the descriptor data using the --descriptor_set_out option. I've written some Java code to read the file and try to instantiate a default instance of one of the objects in the descriptor, so that I can write it out to an XML file using the protobuf-format-java library.

Here's my code. The variable "descriptorData" contains the binary content of the descriptor file, without any modifications:

DescriptorProtos.FileDescriptorSet fdSet =
    FileDescriptorSet.newBuilder().mergeFrom(descriptorData).build();
FileDescriptorProto fdp = fdSet.getFile(0);

List messageTypes = fdp.getMessageTypeList();
for (DescriptorProto type : messageTypes)
{
    System.out.println("Type is: " + type.getName());
    FileDescriptor fd = type.getDescriptorForType().getFile();

    DynamicMessage dm =
        DynamicMessage.getDefaultInstance(type.getDescriptorForType());
    System.out.println(XmlFormat.printToString(dm));
}

I've tried numerous combinations of the above code, but each time I get the following output:

Type is: Type1
Type is: Type2
Type is: Type3
Type is: Type4
Type is: Type5
Type is: Type6

The proto file has Type1, Type2, Type3, etc., defined as messages. The fact that type.getName() does return the type names from my proto file leads me to believe I'm heading in the right direction. However, the DynamicMessage type that is created (and serialized to XML) seems to indicate that I'm not passing the right descriptor instance in to create the object.

Any thoughts?

Thanks,
Kevin
Re: RPM Spec File
On Wed, Aug 19, 2009 at 20:35, Kenton Varda wrote:
> Well, I haven't observed this problem on other platforms or distros. What
> happens if you write a very basic program that uses pthread_once, then try
> to compile it with -pthread (but not -lpthread)? If this doesn't work, I
> suspect something is wrong with the way GCC was built for your distribution.

FYI, I've just reported the issue in Mandriva's bug tracker: https://qa.mandriva.com/show_bug.cgi?id=53578

--
Kevin Deldycke
• blog: http://kevin.deldycke.com
• band: http://coolcavemen.com
Re: RPM Spec File
On Wed, Aug 19, 2009 at 20:35, Kenton Varda wrote:
> In any case, the work-around you may want is to set PTHREAD_CFLAGS='-pthread
> -lpthread' -- that is, pass both flags. Or better yet, set
> PTHREAD_CFLAGS=-pthread and PTHREAD_LIBS=-lpthread -- this way -lpthread is
> only passed while linking.

Thanks for the tip! I've tried the combination of PTHREAD_CFLAGS and PTHREAD_LIBS you suggested and it works well. Better: setting PTHREAD_LIBS alone is enough. You can find attached an updated version of my spec file.

--
Kevin Deldycke
• blog: http://kevin.deldycke.com
• band: http://coolcavemen.com

[Attachment: protobuf.spec]
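For reference, the workaround discussed above boils down to overriding the pthread variables at configure time. A sketch of the invocation under the assumptions stated in the thread (this is the suggested workaround for the Mandriva build, not an official protobuf build recipe):

```sh
# -pthread stays in the compile flags so the C runtime uses
# thread-safe behaviour; -lpthread is added only at link time.
PTHREAD_CFLAGS='-pthread' PTHREAD_LIBS='-lpthread' ./configure
make
```

Per the reply above, setting PTHREAD_LIBS alone was sufficient in practice.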
Re: RPM Spec File
On Wed, Aug 19, 2009 at 02:19, Kenton Varda wrote:
> The problem with these spec files is that they're large and complicated and
> I just don't have time to learn how they work and maintain them. If someone
> would like to commit to maintaining these things -- which means I'd call on
> you to update them for each release, answer questions about them, etc., and
> I'd need you to respond promptly (within a day or two) -- then we could add
> them to the official package. Otherwise I'd prefer to stick with the
> current decentralized approach.

I think you are right regarding spec file management: maintaining a generic spec that works for each RPM-based distribution is a lot of work. These files become a big mess really fast: think about supporting different distributions, distribution versions and architectures... That's a lot of possible combinations! I understand why you require such commitment and I agree with you. So +1 for the decentralized approach. BTW, should we continue the packaging-related discussion of protobuf here?

> BTW, Kevin, I'm confused about this line in your changelog:
>
>> - Add -lpthread option to environment (else configure set it to -pthread)
>
> -pthread is the correct option, and it implies -lpthread. -pthread ensures
> that the C runtime library uses thread-safe functions (e.g. errno becomes
> thread-local) whereas -lpthread merely links against libpthread.so. Why did
> you change this?

Here is what happened when I didn't force the "-lpthread" option. First, "configure" guesses that we should use "-pthread":

(...)
checking for the pthreads library -lpthreads... no
checking whether pthreads work without any flags... no
checking whether pthreads work with -Kthread... no
checking whether pthreads work with -kthread... no
checking for the pthreads library -llthread... no
checking whether pthreads work with -pthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking if more special flags are required for pthreads...
no checking whether to check for GCC pthread/shared inconsistencies... yes checking whether -pthread is sufficient with -shared... yes checking the location of hash_map... configure: creating ./config.status (...) Then the compilation fail: (...) libtool: link: x86_64-mandriva-linux-gnu-g++ -shared -nostdlib /usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/../../../../lib64/crti.o /usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/crtbeginS.o .libs/common.o .libs/once.o .libs/hash.o .libs/extension_set.o .libs/generated_message_util.o .libs/message_lite.o .libs/repeated_field.o .libs/wire_format_lite.o .libs/coded_stream.o .libs/zero_copy_stream.o .libs/zero_copy_stream_impl_lite.o .libs/strutil.o .libs/substitute.o .libs/structurally_valid.o .libs/descriptor.o .libs/descriptor.pb.o .libs/descriptor_database.o .libs/dynamic_message.o .libs/extension_set_heavy.o .libs/generated_message_reflection.o .libs/message.o .libs/reflection_ops.o .libs/service.o .libs/text_format.o .libs/unknown_field_set.o .libs/wire_format.o .libs/gzip_stream.o .libs/printer.o .libs/tokenizer.o .libs/zero_copy_stream_impl.o .libs/importer.o .libs/parser.o -lz -L/usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2 -L/usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/../../.. -lstdc++ -lm -lc -lgcc_s /usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/crtendS.o /usr/lib/gcc/x86_64-manbo-linux-gnu/4.3.2/../../../../lib64/crtn.o -pthread -Wl,--as-needed -Wl,--no-undefined -Wl,-z -Wl,relro -pthread -Wl,-soname -Wl,libprotobuf.so.4 -o .libs/libprotobuf.so.4.0.0 libtool: compile: x86_64-mandriva-linux-gnu-g++ -DHAVE_CONFIG_H -I. -I.. 
-pthread -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare -O2 -g -pipe -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -MT python_generator.lo -MD -MP -MF .deps/python_generator.Tpo -c google/protobuf/compiler/python/python_generator.cc -o python_generator.o >/dev/null 2>&1 .libs/common.o: In function `InitShutdownFunctionsOnce': /home/kevin/rpm/BUILD/protobuf-2.2.0/src/./google/protobuf/stubs/once.h:115: undefined reference to `pthread_once' .libs/common.o: In function `GoogleOnceInit': /home/kevin/rpm/BUILD/protobuf-2.2.0/src/./google/protobuf/stubs/once.h:115: undefined reference to `pthread_once' .libs/common.o: In function `GoogleOnceInit': /home/kevin/rpm/BUILD/protobuf-2.2.0/src/google/protobuf/stubs/common.cc:137: undefined reference to `pthread_once' .libs/extension_set.o: In function `GoogleOnceInit': /home/kevin/rpm/BUILD/protobuf-2.2.0/src/./google/protobuf/stubs/once.h:115: undefined reference to `pthread_once' /usr/bin/ld: Dwarf Error: Offset (391001) greater than or equ
Re: RPM Spec File
On Wed, Aug 19, 2009 at 01:18, Kev wrote:
> And tonight I've managed to upgrade it, so you can find an RPM of
> Protocol Buffers 2.2.0 for Mandriva 2009.1 in my repository:
> http://kev.coolcavemen.com/static/repository/mandriva/2009.1/x86_64/

Oh, and FYI, please find my spec file attached. If you find strange or bad things in it, please tell me; I'm far from being proficient in C++... :]

--
Kevin Deldycke
 • blog: http://kevin.deldycke.com
 • band: http://coolcavemen.com

[Attachment: protobuf.spec]

--
You received this message because you are subscribed to the Google Groups "Protocol Buffers" group.
To post to this group, send email to protobuf@googlegroups.com
To unsubscribe from this group, send email to protobuf+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/protobuf?hl=en
Re: Printing out unset fields
... and once you have the FieldDescriptor, you can call has_default_value() and the typed default_value_*() accessors to fill in values for unset fields with their defaults.

Kevin

On Tue, Apr 14, 2009 at 8:06 PM, Kenton Varda wrote:
> The descriptor itself contains a list of all of the defined fields for the
> type. E.g.:
>
>   const Descriptor* type = message->GetDescriptor();
>   for (int i = 0; i < type->field_count(); i++) {
>     const FieldDescriptor* field = type->field(i);
>     // handle field
>   }
>
> On Tue, Apr 14, 2009 at 6:09 PM, Joe wrote:
>
>> I'm using the TextFormat provided by protocol buffers to read in and
>> print out messages in ... text format. I have a lot of optional
>> fields with defaults in my .proto file. When I do a
>> TextFormat::Print, only set values are printed. I'd like
>> TextFormat::Print to also print unset values. Am I crazy?
>>
>> I tried to figure out a way to do this using the reflection API, but
>> it seems to only provide a way to list the set fields of a message.
>> Is there any other way to do this?
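Putting Kenton's descriptor loop together with the default-value accessors, a sketch of such a printer might look like the following. This is only an assumption-laden illustration, not a tested implementation: it requires the C++ libprotobuf headers, takes any generated Message, and handles just two scalar types (the others follow the same pattern via default_value_double(), default_value_bool(), etc.). Repeated fields are skipped since they have no defaults.

```cpp
#include <cstdio>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>

using google::protobuf::Descriptor;
using google::protobuf::FieldDescriptor;
using google::protobuf::Message;
using google::protobuf::Reflection;

// Sketch: print the default value of every unset singular field of `message`.
void PrintUnsetDefaults(const Message& message) {
  const Descriptor* type = message.GetDescriptor();
  const Reflection* reflection = message.GetReflection();
  for (int i = 0; i < type->field_count(); i++) {
    const FieldDescriptor* field = type->field(i);
    // Skip repeated fields (no defaults) and fields that are actually set.
    if (field->is_repeated() || reflection->HasField(message, field)) continue;
    switch (field->cpp_type()) {
      case FieldDescriptor::CPPTYPE_INT32:
        std::printf("%s: %d\n", field->name().c_str(),
                    field->default_value_int32());
        break;
      case FieldDescriptor::CPPTYPE_STRING:
        std::printf("%s: \"%s\"\n", field->name().c_str(),
                    field->default_value_string().c_str());
        break;
      default:
        break;  // other scalar types are analogous
    }
  }
}
```

Combined with a normal TextFormat::Print of the set fields, this would give Joe output covering both set and unset fields.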
Textmate bundle for protobufs
I made a syntax highlighting bundle for editing .proto files in TextMate. It's pretty rough, but it's helped me, so I thought others might like it as well. Feedback is welcome, and feel free to improve it too.

http://github.com/kevinweil/protobuf.tmbundle

Thanks,
Kevin